Computers, Volume 14, Issue 11 (November 2025) – 31 articles

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the tables of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view a paper in PDF format, click on the "PDF Full-text" link and use the free Adobe Reader to open it.
21 pages, 17739 KB  
Article
Re_MGFE: A Multi-Scale Global Feature Embedding Spectrum Sensing Method Based on Relation Network
by Jiayi Wang, Fan Zhou, Jinyang Ren, Lizhuang Tan, Jian Wang, Peiying Zhang and Shaolin Liao
Computers 2025, 14(11), 480; https://doi.org/10.3390/computers14110480 - 4 Nov 2025
Abstract
Currently, the increasing number of Internet of Things devices makes the shortage of spectrum resources prominent. Spectrum sensing technology can effectively alleviate this problem by monitoring the spectrum in real time. However, in practical applications it is difficult to obtain large numbers of labeled samples, so neural network models cannot be fully trained and performance suffers. Moreover, existing few-shot methods focus on capturing spatial features and ignore how features are represented at different scales, which reduces feature diversity. To address these issues, this paper proposes a few-shot spectrum sensing method based on multi-scale global features. To enhance feature diversity, the method employs a multi-scale feature extractor, which improves the model's ability to distinguish signals and avoids overfitting. In addition, to make full use of the frequency features at different scales, a learnable weighted feature reinforcer is constructed to enhance them. Simulation results show that, for SNRs from 0 to 10 dB, the recognition accuracy of the network exceeds 81% in all task modes, outperforming existing methods and realizing accurate spectrum sensing under few-shot conditions. Full article
(This article belongs to the Section Internet of Things (IoT) and Industrial IoT)
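As a rough illustration of the multi-scale idea in this abstract, the following is a minimal PyTorch sketch, not the authors' model: parallel convolution branches at several kernel sizes stand in for the multi-scale extractor, and a softmax over learnable per-branch weights loosely echoes the feature reinforcer. All names, shapes, and the fusion rule are assumptions.

```python
import torch
import torch.nn as nn

class MultiScaleExtractor(nn.Module):
    """Parallel conv branches at several kernel sizes; a learnable,
    softmax-normalized weight per branch rescales each scale before fusion
    (a hypothetical stand-in for the paper's feature reinforcer)."""
    def __init__(self, in_ch: int = 2, out_ch: int = 32, scales=(3, 5, 7)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv1d(in_ch, out_ch, k, padding=k // 2) for k in scales
        )
        self.scale_logits = nn.Parameter(torch.zeros(len(scales)))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        w = torch.softmax(self.scale_logits, dim=0)
        feats = [w[i] * branch(x) for i, branch in enumerate(self.branches)]
        return torch.cat(feats, dim=1)  # fuse scales along the channel axis

x = torch.randn(8, 2, 1024)            # toy batch of I/Q signal windows
print(MultiScaleExtractor()(x).shape)  # torch.Size([8, 96, 1024])
```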
16 pages, 34458 KB  
Article
A Mixed Reality-Based Training and Guidance System for Quality Control
by Luzia Saraiva, João Henriques, José Silva, André Barbosa and Serafim M. Oliveira
Computers 2025, 14(11), 479; https://doi.org/10.3390/computers14110479 - 3 Nov 2025
Abstract
The increasing demand for customized products has raised the significant challenge of increasing performance while reducing costs in industry. Meeting that demand requires operators to enhance their capabilities to cope with greater complexity and skill demands and higher cognitive load while sustaining performance and limiting errors. To address this scenario, a virtual instructor framework is proposed to instruct operators and support procedural quality, enabled by You Only Look Once (YOLO) models and by equipping operators with the Magic Leap 2 Head-Mounted Display (HMD). The framework relies on key modules: Instructor, Management, Core, Object Detection, 3D Modeling, and Storage. A use case in the automotive industry helped validate a proof of concept (PoC) of the proposed framework, which can help guide the development of new tools supporting assembly operations in industry. Full article
20 pages, 689 KB  
Article
Constrained Object Hierarchies as a Unified Theoretical Model for Intelligence and Intelligent Systems
by Harris Wang
Computers 2025, 14(11), 478; https://doi.org/10.3390/computers14110478 - 3 Nov 2025
Abstract
Achieving Artificial General Intelligence (AGI) requires a unified framework capable of modeling the full spectrum of intelligent behavior—from logical reasoning and sensory perception to emotional regulation and collective decision-making. This paper proposes Constrained Object Hierarchies (COH), a neuroscience-inspired theoretical model that represents intelligent systems as hierarchical compositions of objects governed by symbolic structure, neural adaptation, and constraint-based control. Each object is formally defined by a 9-tuple structure: O=(C,A,M,N,E,I,T,G,D), encapsulating its Components, Attributes, Methods, Neural components, Embedding, and governing Identity constraints, Trigger constraints, Goal constraints, and Constraint Daemons. To demonstrate the scope and versatility of COH, we formalize nine distinct intelligence types—including computational, perceptual, motor, affective, and embodied intelligence—each with detailed COH parameters and implementation blueprints. To operationalize the framework, we introduce GISMOL, a Python-based toolkit for instantiating COH objects and executing their constraint systems and neural components. GISMOL supports modular development and integration of intelligent agents, enabling a structured methodology for AGI system design. By unifying symbolic and connectionist paradigms within a constraint-governed architecture, COH provides a scalable and explainable foundation for building general-purpose intelligent systems. A comprehensive summary of the research contributions is presented right after the introduction. Full article
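The 9-tuple lends itself to a direct container representation. The sketch below renders O = (C, A, M, N, E, I, T, G, D) as a Python dataclass whose constraint daemons fire when a trigger constraint holds; it is a hypothetical illustration, not GISMOL, and all concrete field types are assumptions.

```python
from dataclasses import dataclass, field
from typing import Any, Callable, Dict, List

@dataclass
class COHObject:
    """O = (C, A, M, N, E, I, T, G, D); concrete field types are assumptions."""
    components: List["COHObject"] = field(default_factory=list)                   # C
    attributes: Dict[str, Any] = field(default_factory=dict)                      # A
    methods: Dict[str, Callable] = field(default_factory=dict)                    # M
    neural: Dict[str, Any] = field(default_factory=dict)                          # N
    embedding: List[float] = field(default_factory=list)                          # E
    identity: List[Callable[["COHObject"], bool]] = field(default_factory=list)   # I
    triggers: List[Callable[["COHObject"], bool]] = field(default_factory=list)   # T
    goals: List[Callable[["COHObject"], float]] = field(default_factory=list)     # G
    daemons: List[Callable[["COHObject"], None]] = field(default_factory=list)    # D

    def enforce(self) -> None:
        """Run every constraint daemon once any trigger constraint fires."""
        if any(trigger(self) for trigger in self.triggers):
            for daemon in self.daemons:
                daemon(self)

agent = COHObject(attributes={"battery": 0.1})
agent.triggers.append(lambda o: o.attributes["battery"] < 0.2)
agent.daemons.append(lambda o: o.attributes.update(state="recharge"))
agent.enforce()
print(agent.attributes)  # {'battery': 0.1, 'state': 'recharge'}
```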
25 pages, 2322 KB  
Article
Enhancing Cyberattack Prevention Through Anomaly Detection Ensembles and Diverse Training Sets
by Faisal Saleem S Alraddadi, Luis F. Lago-Fernández and Francisco B. Rodríguez
Computers 2025, 14(11), 477; https://doi.org/10.3390/computers14110477 - 3 Nov 2025
Abstract
A surge in global connectivity has led to an increase in cyberattacks, creating a need for improved security. A promising area of research is using machine learning to detect these attacks. Traditional two-class machine learning models can be ineffective for real-time detection, as attacks often represent a minority of traffic (an anomaly) and fluctuate with time. This comparative study uses an ensemble of one-class classification models. First, we employed an ensemble of autoencoders with randomly generated architectures to enhance the dynamic detection of attacks, enabling each model to learn distinct aspects of the data distribution. The term ‘dynamic’ reflects the ensemble’s superior responsiveness to different attack rates without the need for retraining, offering enhanced performance compared to a static average of individual models, which we refer to as the baseline approach. Second, for comparison with the ensemble of autoencoders, we employ an ensemble of isolation forests, which also improves dynamic attack detection. We evaluated our ensemble models on the NSL-KDD dataset, testing them, without retraining, under varying attack ratios and comparing the results with the baseline method. We then investigated the impact of training data overlap among ensemble components and its effect on the detection of extremely low attack rates. The objective is to train each model within the ensemble with the minimal amount of data necessary to detect malicious traffic effectively across varying attack rates. Based on the conclusions drawn from our initial study on NSL-KDD, we re-evaluated our strategy with a modern dataset, CIC_IoT-2023, where an ensemble of simple autoencoder models also achieved good performance across various attack rates. Finally, we observed that distributing normal traffic data among ensemble components with a small overlap enhances overall performance. Full article
(This article belongs to the Special Issue Using New Technologies in Cyber Security Solutions (2nd Edition))
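To make the ensemble idea concrete, here is a minimal sketch assuming scikit-learn and synthetic data: several autoencoders with randomly drawn hidden widths are trained on overlapping slices of normal traffic, and the averaged reconstruction error serves as the anomaly score. The architecture draw and data split are illustrative assumptions, not the paper's setup.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X_train = rng.normal(size=(500, 20))       # stand-in for normal traffic only

ensemble = []
for i in range(5):
    hidden = int(rng.integers(4, 12))      # randomly drawn architecture
    part = X_train[rng.choice(len(X_train), 200, replace=False)]  # small overlap
    ae = MLPRegressor(hidden_layer_sizes=(hidden,), max_iter=500,
                      random_state=i).fit(part, part)  # reconstruct the input
    ensemble.append(ae)

def anomaly_score(x: np.ndarray) -> np.ndarray:
    """Mean reconstruction error across the ensemble; higher = more anomalous."""
    errs = [np.mean((ae.predict(x) - x) ** 2, axis=1) for ae in ensemble]
    return np.mean(errs, axis=0)

X_test = np.vstack([rng.normal(size=(5, 20)),            # normal-like rows
                    rng.normal(loc=4.0, size=(5, 20))])  # attack-like shift
print(anomaly_score(X_test).round(2))  # shifted rows should score higher
```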
23 pages, 3017 KB  
Article
Real-Time Passenger Flow Analysis in Tram Stations Using YOLO-Based Computer Vision and Edge AI on Jetson Nano
by Sonia Diaz-Santos, Pino Caballero-Gil and Cándido Caballero-Gil
Computers 2025, 14(11), 476; https://doi.org/10.3390/computers14110476 - 3 Nov 2025
Abstract
Efficient real-time computer vision-based passenger flow analysis is increasingly important for the management of intelligent transportation systems and smart cities. This paper presents the design and implementation of a system for real-time object detection, tracking, and people counting in tram stations. The proposed approach integrates YOLO-based detection with a lightweight tracking module and is deployed on an NVIDIA Jetson Nano device, enabling operation under resource constraints and demonstrating the potential of edge AI. Multiple YOLO versions, from v3 to v11, were evaluated on data collected in collaboration with Metropolitano de Tenerife. Experimental results show that YOLOv5s achieves the best balance between detection accuracy and inference speed, reaching 96.85% accuracy in counting tasks. The system demonstrates the feasibility of applying edge AI to monitor passenger flow in real time, contributing to intelligent transportation and smart city initiatives. Full article
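The counting step can be illustrated independently of the detector. Below is a hedged sketch of line-crossing confirmation over consecutive frames, with hypothetical track IDs and thresholds; the paper's actual tracking module may differ.

```python
from collections import deque

class LineCrossCounter:
    """Count a tracked person once their center crosses a virtual line with
    consistent downward motion over several frames; the confirmation window
    suppresses jitter-induced double counts. Thresholds are assumptions."""
    def __init__(self, line_y: float, confirm_frames: int = 3):
        self.line_y = line_y
        self.confirm = confirm_frames
        self.history: dict[int, deque] = {}
        self.counted: set[int] = set()
        self.total = 0

    def update(self, track_id: int, center_y: float) -> None:
        h = self.history.setdefault(track_id, deque(maxlen=self.confirm))
        h.append(center_y)
        crossed = (len(h) == self.confirm
                   and all(b > a for a, b in zip(h, list(h)[1:]))  # moving down
                   and h[-1] > self.line_y > h[0])                 # spans the line
        if crossed and track_id not in self.counted:
            self.counted.add(track_id)
            self.total += 1

counter = LineCrossCounter(line_y=240.0)
for y in (200.0, 230.0, 250.0):   # one track descending across the line
    counter.update(track_id=7, center_y=y)
print(counter.total)  # 1
```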
26 pages, 1572 KB  
Article
Pulse-Driven Spin Paradigm for Noise-Aware Quantum Classification
by Carlos Riascos-Moreno, Andrés Marino Álvarez-Meza and German Castellanos-Dominguez
Computers 2025, 14(11), 475; https://doi.org/10.3390/computers14110475 - 1 Nov 2025
Abstract
Quantum machine learning (QML) integrates quantum computing with classical machine learning. Within this domain, QML-CQ classification tasks, where classical data is processed by quantum circuits, have attracted particular interest for their potential to exploit high-dimensional feature maps, entanglement-enabled correlations, and non-classical priors. Yet, practical realizations remain constrained by the Noisy Intermediate-Scale Quantum (NISQ) era, where limited qubit counts, gate errors, and coherence losses necessitate frugal, noise-aware strategies. The Data Re-Uploading (DRU) algorithm has emerged as a strong NISQ-compatible candidate, offering universal classification capabilities with minimal qubit requirements. While DRU has been experimentally demonstrated on ion-trap, photonic, and superconducting platforms, no implementations exist for spin-based quantum processing units (QPU-SBs), despite their scalability potential via CMOS-compatible fabrication and recent demonstrations of multi-qubit processors. Here, we present a pulse-level, noise-aware DRU framework for spin-based QPUs, designed to bridge the gap between gate-level models and realistic spin-qubit execution. Our approach includes (i) compiling DRU circuits into hardware-proximate, time-domain controls derived from the Loss–DiVincenzo Hamiltonian, (ii) explicitly incorporating coherent and incoherent noise sources through pulse perturbations and Lindblad channels, (iii) enabling systematic noise-sensitivity studies across one-, two-, and four-spin configurations via continuous-time simulation, and (iv) developing a noise-aware training pipeline that benchmarks gate-level baselines against spin-level dynamics using information-theoretic loss functions. Numerical experiments show that our simulations reproduce gate-level dynamics with fidelities near unity while providing a richer error characterization under realistic noise. Moreover, divergence-based losses significantly enhance classification accuracy and robustness compared to fidelity-based metrics. Together, these results establish the proposed framework as a practical route for advancing DRU on spin-based platforms and motivate future work on error-attentive training and spin–quantum-dot noise modeling. Full article
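A gate-level (not pulse-level) data re-uploading classifier is easy to sketch with plain NumPy, which may help situate the paper's contribution: the sketch below interleaves trainable rotations with re-encodings of the input on one qubit and reads out a label from the basis-state probability. It is an idealized, noise-free illustration; the layer count and readout rule are assumptions.

```python
import numpy as np

def ry(theta: float) -> np.ndarray:
    """Single-qubit Y-rotation gate."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

def dru_state(x: float, thetas: np.ndarray) -> np.ndarray:
    """Data re-uploading on one qubit: each layer re-encodes the same input x
    and then applies a trainable rotation."""
    state = np.array([1.0, 0.0])
    for theta in thetas:
        state = ry(theta) @ ry(x) @ state
    return state

def predict(x: float, thetas: np.ndarray) -> int:
    """Classify by which basis state dominates the final state."""
    p0 = abs(dru_state(x, thetas)[0]) ** 2
    return 0 if p0 >= 0.5 else 1

thetas = np.array([0.1, -0.4, 0.7])   # untrained 3-layer parameters
print([predict(x, thetas) for x in (-1.0, 0.0, 1.0)])
```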
5 pages, 158 KB  
Editorial
Uncertainty-Aware Artificial Intelligence: Editorial
by H. M. Dipu Kabir and Subrota Kumar Mondal
Computers 2025, 14(11), 474; https://doi.org/10.3390/computers14110474 - 1 Nov 2025
Abstract
Artificial Intelligence (AI) has revolutionized the way we think, perceive, and interact, delivering remarkable advances across domains ranging from computer vision and natural language processing to healthcare, power, finance, autonomy, and philosophies [...] Full article
14 pages, 1345 KB  
Article
Fair and Energy-Efficient Charging Resource Allocation for Heterogeneous UGV Fleets
by Dimitris Ziouzios, Nikolaos Baras, Minas Dasygenis and Constantinos Tsanaktsidis
Computers 2025, 14(11), 473; https://doi.org/10.3390/computers14110473 - 1 Nov 2025
Abstract
This paper addresses the critical challenge of energy management for autonomous robots in the context of large-scale photovoltaic parks. The dynamic and vast nature of these environments, characterized by dense, structured rows of solar panels, introduces unique complexities, including uneven terrain, varied operational demands, and the need for equitable resource allocation among diverse robot fleets. The presented framework adapts and significantly extends the Affinity Propagation algorithm for strategic charging station placement within photovoltaic parks. The key contributions include: (1) a multi-attribute grid-based environment model that quantifies terrain difficulty and panel-specific obstacles; (2) an extended multi-factor scoring function that incorporates penalties for terrain inaccessibility and proximity to sensitive photovoltaic infrastructure; (3) a sophisticated, energy-aware consumption model that accounts for terrain friction, slope, and rolling resistance; and (4) a novel multi-agent fairness constraint that ensures equitable access to charging resources across heterogeneous robot sub-fleets. Through extensive simulations on synthesized photovoltaic park environments, it is demonstrated that the enhanced algorithm not only significantly reduces travel distance and energy consumption but also promotes a fairer, more efficient operational ecosystem, paving the way for scalable and sustainable robotic maintenance and inspection. Full article
(This article belongs to the Special Issue Advanced Human–Robot Interaction 2025)
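As a toy rendering of the placement step, the sketch below runs scikit-learn's Affinity Propagation on synthetic way-points with a terrain penalty folded into the similarity matrix, so hard-to-reach points are less likely to become exemplars (candidate charging sites). The penalty scale and data are assumptions, not the paper's scoring function.

```python
import numpy as np
from sklearn.cluster import AffinityPropagation

rng = np.random.default_rng(1)
waypoints = rng.uniform(0, 500, size=(120, 2))   # UGV way-points in a PV park (m)
penalty = rng.uniform(0, 1, size=120)            # assumed terrain difficulty, 0..1

# Exemplars are chosen from the points themselves; subtracting a penalty from
# every similarity toward point j makes hard terrain a less attractive site.
similarity = -np.linalg.norm(waypoints[:, None] - waypoints[None, :], axis=2)
similarity -= 200.0 * penalty[None, :]           # penalty scale is an assumption

ap = AffinityPropagation(affinity="precomputed", random_state=0).fit(similarity)
print("candidate charging sites:")
print(waypoints[ap.cluster_centers_indices_].round(1))
```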
33 pages, 5642 KB  
Article
Feature-Optimized Machine Learning Approaches for Enhanced DDoS Attack Detection and Mitigation
by Ahmed Jamal Ibrahim, Sándor R. Répás and Nurullah Bektaş
Computers 2025, 14(11), 472; https://doi.org/10.3390/computers14110472 - 1 Nov 2025
Abstract
Distributed denial of service (DDoS) attacks pose a serious risk to the operational stability of company networks, often leading to service disruptions, financial damage, and a loss of trust and credibility. The increasing sophistication and scale of these threats highlight the pressing need for advanced mitigation strategies. Despite the numerous existing studies on DDoS detection, many rely on large, redundant feature sets and lack validation for real-time applicability, leading to high computational complexity and limited generalization across diverse network conditions. This study addresses this gap by proposing a feature-optimized and computationally efficient ML framework for DDoS detection and mitigation using a benchmark dataset. The proposed approach serves as a foundational step toward a low-complexity model suitable for future real-time and hardware-based implementation. The dataset was systematically preprocessed to identify critical parameters, such as Packet Length Min, Total Backward Packets, and Avg Fwd Segment Size. Several ML algorithms, including Logistic Regression, Decision Tree, Random Forest, Gradient Boosting, and CatBoost, were applied to develop models for detecting and mitigating abnormal network traffic. The developed models demonstrate high performance, achieving 99.78% accuracy with Decision Tree and 99.85% with Random Forest, improvements of 1.53% and 0.74% over previous work, respectively. In addition, the Decision Tree algorithm achieved 99.85% accuracy for mitigation, with an inference time as low as 0.004 s, proving its suitability for identifying DDoS attacks in real time. Overall, this research presents an effective approach for DDoS detection, emphasizing the integration of ML models into existing security systems to enhance real-time threat mitigation. Full article
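For orientation, a minimal scikit-learn pipeline over stand-ins for the named flow features might look like the following; the synthetic data and model settings are assumptions, and real experiments would use the benchmark dataset instead.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
# Synthetic stand-ins for the selected flow features (Packet Length Min,
# Total Backward Packets, Avg Fwd Segment Size).
n = 2000
X = rng.normal(size=(n, 3))
y = (X[:, 0] + 0.8 * X[:, 2] + rng.normal(scale=0.3, size=n) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print(f"accuracy: {accuracy_score(y_te, clf.predict(X_te)):.3f}")
```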
20 pages, 4855 KB  
Article
A Multi-Step PM2.5 Time Series Forecasting Approach for Mining Areas Using Last Day Observed, Correlation-Based Retrieval, and Interpolation
by Anibal Flores, Hugo Tito-Chura, Jose Guzman-Valdivia, Ruso Morales-Gonzales, Eduardo Flores-Quispe and Osmar Cuentas-Toledo
Computers 2025, 14(11), 471; https://doi.org/10.3390/computers14110471 - 1 Nov 2025
Abstract
Monitoring PM2.5 in mining areas is essential for air quality management; however, most studies focus on single-step forecasts, limiting timely decision making. This work addresses the need for accurate multi-step PM2.5 prediction to support proactive pollution control in mining regions. Accordingly, a new model for multi-step PM2.5 time series forecasting is proposed, based on historical data: the last day observed (LDO), data retrieved by correlation level, and linear interpolation. As case studies, data from three environmental monitoring stations in mining areas of Peru were considered: Tala station near the Cuajone mine, Uchumayo near the Cerro Verde mine, and Espinar near the Tintaya mine. The proposed model was compared with benchmark models, including Long Short-Term Memory (LSTM), Bidirectional LSTM (BiLSTM), Gated Recurrent Unit (GRU), and Bidirectional GRU (BiGRU). The results show that the proposed model achieves accuracy similar to that of the benchmark models. Its main advantages lie in the amount of data required for predictions and in the training time, which is less than 0.2% of that required by the deep learning-based models. Full article
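The model's ingredients (LDO, correlation-based retrieval, interpolation) can be sketched in a few lines of NumPy. The blending weights and anchor spacing below are assumptions, not the paper's exact rule.

```python
import numpy as np

def forecast_next_day(history: np.ndarray, horizon: int = 24) -> np.ndarray:
    """Blend the last day observed (LDO) with the day that followed the most
    correlated past day, then smooth via linear interpolation between
    6-hourly anchors. Blend weights and anchor spacing are assumptions."""
    days = history.reshape(-1, horizon)              # one row per day
    ldo = days[-1]
    corrs = [np.corrcoef(ldo, day)[0, 1] for day in days[:-1]]
    best = int(np.argmax(corrs))                     # most similar past day
    follower = days[best + 1]                        # what came after it
    blend = 0.5 * ldo + 0.5 * follower
    anchors = np.arange(0, horizon, 6)
    return np.interp(np.arange(horizon), anchors, blend[anchors])

rng = np.random.default_rng(3)
series = np.abs(rng.normal(25, 8, size=30 * 24))     # 30 days of hourly PM2.5
print(forecast_next_day(series).round(1))            # next-day hourly forecast
```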
14 pages, 3063 KB  
Article
Detecting Visualized Malicious Code Through Low-Redundancy Convolution
by Xiao Liu, Jiawang Liu, Yingying Ren and Jining Chen
Computers 2025, 14(11), 470; https://doi.org/10.3390/computers14110470 - 1 Nov 2025
Abstract
The proliferation of sophisticated malware poses a persistent threat to cybersecurity. While visualizing malware as images enables the use of Convolutional Neural Networks, standard architectures are often inefficient and struggle with the high spatial and channel redundancy inherent in these representations. To address this challenge, we propose LR-MalConv, a new detection framework centered on a novel Low-Redundancy Convolution (LR-Conv) module. The LR-Conv module is uniquely designed to synergistically reduce both spatial redundancy, via a gating and reconstruction mechanism, and channel redundancy, through an efficient split–transform–fuse strategy. By integrating LR-Conv into a ResNet backbone, our framework enhances discriminative feature extraction while significantly reducing computational overhead. Extensive experiments on the Malimg benchmark dataset show our method achieves an accuracy of 99.52%, outperforming existing methods. LR-MalConv establishes a new benchmark for visualized malware detection by striking a superior balance between accuracy and computational efficiency, demonstrating the significant potential of redundancy reduction in this domain. Full article
(This article belongs to the Section ICT Infrastructures for Cybersecurity)
21 pages, 4007 KB  
Article
Computer Vision-Driven Framework for IoT-Enabled Basketball Score Tracking
by Ivan Ćirić, Nikola Ivačko, Miljana Milić, Petar Ristić and Dušan Krstić
Computers 2025, 14(11), 469; https://doi.org/10.3390/computers14110469 - 1 Nov 2025
Abstract
This paper presents the design and implementation of a vision-based score detection system tailored for smart IoT basketball applications. The proposed architecture leverages a compact, low-cost device comprising a high-resolution overhead camera and a Raspberry Pi 5 microprocessor equipped with a hardware accelerator for real-time object detection. The detection pipeline integrates convolutional neural networks (YOLO-based) and custom preprocessing techniques to localize the basketball hoop and track the ball trajectory. A scoring event is confirmed when the ball enters the defined scoring zone with downward motion over multiple frames, effectively reducing false positives caused by occlusions, multiple balls, or irregular shot directions. The system is part of a scalable IoT analytics platform known as Koško, which provides real-time statistics, leaderboards, and user engagement tools through a web-based interface. Field tests were conducted using data collected from various public and school courts across Niš, Serbia, resulting in a robust and adaptable solution for automated basketball score monitoring in both indoor and outdoor environments. The methodology supports edge computing, multilingual deployment, and integration with smart coaching and analytics systems. Full article
(This article belongs to the Special Issue AI in Complex Engineering Systems)
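The scoring-event rule described above (ball inside the scoring zone with downward motion over multiple frames) reduces to a small predicate over recent ball centers. A hedged sketch, with assumed zone bounds and frame count:

```python
def is_score(trajectory, zone, k: int = 3) -> bool:
    """Confirm a basket: the ball's last k centers all lie inside the hoop's
    scoring zone and move downward frame over frame (image y grows downward).
    Zone bounds and k are assumptions."""
    if len(trajectory) < k:
        return False
    recent = trajectory[-k:]
    x0, y0, x1, y1 = zone
    inside = all(x0 <= x <= x1 and y0 <= y <= y1 for x, y in recent)
    downward = all(b[1] > a[1] for a, b in zip(recent, recent[1:]))
    return inside and downward

track = [(310, 180), (312, 196), (311, 214), (313, 233)]  # detector centers, px
print(is_score(track, zone=(290, 190, 330, 260)))         # True
```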
20 pages, 557 KB  
Article
Algorithm for Obtaining Complete Irreducible Polynomials over Given Galois Field for New Method of Digital Monitoring of Information Space
by Dina Shaltykova, Aliya Massalimova, Yelizaveta Vitulyova and Ibragim Suleimenov
Computers 2025, 14(11), 468; https://doi.org/10.3390/computers14110468 - 1 Nov 2025
Abstract
Irreducible polynomials are widely used in modern cryptography; however, algorithms for finding such polynomials remain quite complex and require significant computational resources. In this study, a new approach to finding irreducible equations over Galois fields GF(p) is proposed. It is shown that such irreducible equations can be obtained by solving a system of linear equations over the base Galois field, generated by any element of the field GF(p^K) that is distinct from the elements of the base field and from elements corresponding to lower-degree extensions. The connection of the proposed approach with algorithms based on the Frobenius automorphism is established. The case corresponding to the field GF(3) and matrices over this field is examined in detail. It has been shown that the proposed method makes it possible to obtain complete sets of irreducible polynomials over a given Galois field. It has also been demonstrated that generating such sets is of particular interest for the development of new methods of digital monitoring of the information space, which are based on analogies with error-correcting coding techniques. Full article
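For low degrees the method's output is easy to cross-check by brute force: a polynomial of degree 2 or 3 over GF(p) is irreducible exactly when it has no root in GF(p). The sketch below enumerates the complete set of monic irreducible quadratics over GF(3); it is a verification aid, not the linear-system algorithm proposed in the paper.

```python
from itertools import product

p = 3  # base field GF(3)

def has_root(coeffs, p):
    """coeffs = (c0, c1, ..., cn) for c0 + c1*x + ... + cn*x^n."""
    return any(sum(c * pow(x, i, p) for i, c in enumerate(coeffs)) % p == 0
               for x in range(p))

# A degree-2 (or degree-3) polynomial over GF(p) is irreducible iff it has no
# root in GF(p), so a root scan yields the complete set of monic irreducible
# quadratics. Higher degrees need a genuine factorization test instead.
irreducible = [(c0, c1, 1) for c0, c1 in product(range(p), repeat=2)
               if not has_root((c0, c1, 1), p)]
for c0, c1, _ in irreducible:
    print(f"x^2 + {c1}x + {c0}")   # x^2 + 0x + 1, x^2 + 1x + 2, x^2 + 2x + 2
```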
36 pages, 4464 KB  
Article
Efficient Image-Based Memory Forensics for Fileless Malware Detection Using Texture Descriptors and LIME-Guided Deep Learning
by Qussai M. Yaseen, Esraa Oudat, Monther Aldwairi and Salam Fraihat
Computers 2025, 14(11), 467; https://doi.org/10.3390/computers14110467 - 1 Nov 2025
Abstract
Memory forensics is an essential cybersecurity tool that comprehensively examines volatile memory to detect the malicious activity of fileless malware, which can bypass disk analysis. Image-based detection techniques provide a promising solution by visualizing memory data as images that can be analyzed with image processing tools and machine learning methods. However, using image-based data for detection and classification effectively requires high computational effort. This paper investigates the efficacy of texture-based methods in detecting and classifying memory-resident (fileless) malware at different image resolutions, identifying the feature descriptors, classifiers, and resolutions that most accurately classify malware into specific families and differentiate it from benign software. The paper uses both local and global descriptors: local descriptors include Oriented FAST and Rotated BRIEF (ORB), Scale-Invariant Feature Transform (SIFT), and Histogram of Oriented Gradients (HOG), while global descriptors include Discrete Wavelet Transform (DWT), GIST, and Gray Level Co-occurrence Matrix (GLCM). The results indicate that as image resolution increases, most feature descriptors yield more discriminative features but require more time and processing resources. To address this challenge, this paper proposes a novel approach that integrates Local Interpretable Model-agnostic Explanations (LIME) with deep learning models to automatically identify and crop the most important regions of memory images. The LIME ROI was extracted from the ResNet50 and MobileNet models' predictions separately; the images were resized to 128 × 128, and sampling was performed dynamically to speed up LIME computation. The ROIs are cropped to new images of size 100 × 100 in two stages: a coarse stage and a fine stage. The two LIME-based cropped images, generated using ResNet50 and MobileNet, are fed to a lightweight neural network to evaluate the effectiveness of the LIME-identified regions. The results demonstrate that LIME-based cropping guided by the MobileNet model's predictions improves the model's efficiency while preserving important features, achieving a classification accuracy of 85% on multi-class classification. Full article
(This article belongs to the Special Issue Using New Technologies in Cyber Security Solutions (2nd Edition))
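The coarse cropping stage can be pictured as a sliding-window search for the most important 100 × 100 region of the explanation heatmap. The sketch below uses an integral image for the window sums; the heatmap stands in for a LIME mask, and the fine-stage refinement is only noted in a comment.

```python
import numpy as np

def crop_most_important(img: np.ndarray, importance: np.ndarray,
                        size: int = 100) -> np.ndarray:
    """Coarse stage of an importance-guided crop: slide a size x size window
    over the heatmap and keep the crop with the largest summed importance.
    A fine stage would repeat this search inside the coarse crop."""
    h, w = importance.shape
    ii = np.pad(importance, ((1, 0), (1, 0))).cumsum(0).cumsum(1)  # integral image
    best, best_rc = -np.inf, (0, 0)
    for r in range(h - size + 1):
        for c in range(w - size + 1):
            s = ii[r + size, c + size] - ii[r, c + size] - ii[r + size, c] + ii[r, c]
            if s > best:
                best, best_rc = s, (r, c)
    r, c = best_rc
    return img[r:r + size, c:c + size]

rng = np.random.default_rng(0)
memory_img = rng.random((128, 128))                 # resized memory image
heat = np.zeros((128, 128))
heat[20:60, 70:110] = 1.0                           # stand-in for a LIME mask
print(crop_most_important(memory_img, heat).shape)  # (100, 100)
```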
31 pages, 5390 KB  
Article
Artificial Intelligence-Driven Mobile Platform for Thermographic Imaging to Support Maternal Health Care
by Lucas Miguel Iturriago-Salas, Jeison Andres Mesa-Sarmiento, Paola Alexandra Castro-Cabrera, Andrés Marino Álvarez-Meza and German Castellanos-Dominguez
Computers 2025, 14(11), 466; https://doi.org/10.3390/computers14110466 - 1 Nov 2025
Abstract
Maternal health care during labor requires the continuous and reliable monitoring of analgesic procedures, yet conventional systems are often subjective, indirect, and operator-dependent. Infrared thermography (IRT) offers a promising non-invasive approach for labor epidural analgesia (LEA) monitoring, but its practical implementation is hindered by clinical and hardware limitations. This work presents a novel artificial intelligence-driven mobile platform to overcome these hurdles. The proposed solution integrates a lightweight deep learning model for semantic segmentation, a B-spline-based free-form deformation (FFD) approach for non-rigid dermatome registration, and efficient on-device inference. Our analysis identified a U-Net with a MobileNetV3 backbone as the optimal architecture, achieving a high Dice score of 0.97 and a 4.5% intersection over union (IoU) gain over heavier backbones while being 73% more parameter-efficient. The entire AI pipeline is deployed on a commercial smartphone via TensorFlow Lite, achieving an on-device inference time of approximately two seconds per image. Deployed within a user-friendly interface, our approach provides straightforward feedback to support decision making in labor management. By integrating thermal imaging with deep learning and mobile deployment, the proposed system provides a practical solution to enhance maternal care. By offering a quantitative, automated tool, this work demonstrates a viable pathway to augment or replace subjective clinical assessments with objective, data-driven monitoring, bridging the gap between advanced AI research and point-of-care practice in obstetric anesthesia. Full article
(This article belongs to the Special Issue Machine Learning: Innovation, Implementation, and Impact)
35 pages, 8683 KB  
Article
Teaching Machine Learning to Undergraduate Electrical Engineering Students
by Gerald Fudge, Anika Rimu, William Zorn, July Ringle and Cody Barnett
Computers 2025, 14(11), 465; https://doi.org/10.3390/computers14110465 - 28 Oct 2025
Abstract
Proficiency in machine learning (ML) and in its computational math foundations has become a critical skill for engineers. Required areas of proficiency include the ability to use available ML tools and the ability to develop new tools to solve engineering problems. Engineers also need to be proficient in using generative artificial intelligence (AI) tools in a variety of contexts, including as an aid to learning, research, writing, and code generation. Using these tools properly requires a solid understanding of the associated computational math foundation. Without this foundation, engineers will struggle to develop new tools and can easily misuse available ML/AI tools, leading to poorly designed systems that are suboptimal or even harmful to society. Teaching (and learning) these skills can be difficult due to the breadth of skills required. One contribution of this paper is that it approaches teaching this topic within an industrial engineering human factors framework. Another contribution is the detailed case study narrative describing specific pedagogical challenges, including the implementation of teaching strategies (successful and unsuccessful), recent observed trends in generative AI, and student perspectives on learning this topic. Although the primary methodology is anecdotal, we also include empirical data in support of the anecdotal results. Full article
(This article belongs to the Special Issue STEAM Literacy and Computational Thinking in the Digital Era)
21 pages, 783 KB  
Article
SACW: Semi-Asynchronous Federated Learning with Client Selection and Adaptive Weighting
by Shuaifeng Li, Fangfang Shan, Shiqi Mao, Yanlong Lu, Fengjun Miao and Zhuo Chen
Computers 2025, 14(11), 464; https://doi.org/10.3390/computers14110464 - 27 Oct 2025
Abstract
Federated learning (FL), as a privacy-preserving distributed machine learning paradigm, demonstrates unique advantages in addressing data silo problems. However, the prevalent statistical heterogeneity (data distribution disparities) and system heterogeneity (device capability variations) in practical applications significantly hinder FL performance. Traditional synchronous FL suffers from severe waiting delays due to its mandatory synchronization mechanism, while asynchronous approaches incur model bias issues caused by training pace discrepancies. To tackle these challenges, this paper proposes the SACW framework, which effectively balances training efficiency and model quality through a semi-asynchronous training mechanism. The framework adopts a hybrid strategy of “asynchronous client training–synchronous server aggregation,” combined with an adaptive weighting algorithm based on model staleness and data volume. This approach significantly improves system resource utilization and mitigates system heterogeneity. Simultaneously, the server employs data distribution-aware client clustering and hierarchical selection strategies to construct a training environment characterized by “inter-cluster heterogeneity and intra-cluster homogeneity.” Representative clients from each cluster are selected to participate in model aggregation, thereby addressing data heterogeneity. We conduct comprehensive comparisons with mainstream synchronous and asynchronous FL methods and perform extensive experiments across various model architectures and datasets. The results demonstrate that SACW achieves better performance in both training efficiency and model accuracy under scenarios with system and data heterogeneity. Full article
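A minimal rendering of staleness- and volume-aware aggregation: each client update is weighted by its data volume and discounted by how many rounds old its base model is. The decay form below is an assumption, not SACW's exact weighting.

```python
import numpy as np

def aggregate(updates, decay: float = 0.5) -> np.ndarray:
    """Weight each client update by data volume, discounted by staleness
    (rounds since the global model it started from); then average."""
    w = np.array([u["n_samples"] / (1.0 + decay * u["staleness"]) for u in updates])
    w /= w.sum()
    return np.tensordot(w, np.stack([u["params"] for u in updates]), axes=1)

updates = [
    {"params": np.array([1.0, 1.0]), "n_samples": 800, "staleness": 0},  # fresh
    {"params": np.array([3.0, 3.0]), "n_samples": 800, "staleness": 4},  # stale
]
print(aggregate(updates))  # [1.5 1.5] -- pulled toward the fresh client
```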
30 pages, 2362 KB  
Article
Bridging the Gap: Enhancing BIM Education for Sustainable Design Through Integrated Curriculum and Student Perception Analysis
by Tran Duong Nguyen and Sanjeev Adhikari
Computers 2025, 14(11), 463; https://doi.org/10.3390/computers14110463 - 25 Oct 2025
Abstract
Building Information Modeling (BIM) is a transformative tool in Sustainable Design (SD), providing measurable benefits for efficiency, collaboration, and performance in architectural, engineering, and construction (AEC) practices. Despite its growing presence in academic curricula, a gap persists between students’ recognition of BIM’s sustainability potential and their confidence or ability to apply these concepts in real-world practice. This study examines students’ understanding and perceptions of BIM and Sustainable Design education, offering insights for enhancing curriculum integration and pedagogical strategies. The objectives are to: (1) assess students’ current understanding of BIM and Sustainable Design; (2) identify gaps and misconceptions in applying BIM to sustainability; (3) evaluate the effectiveness of existing teaching methods and curricula to inform future improvements; and (4) explore the alignment between students’ theoretical knowledge and practical abilities in using BIM for Sustainable Design. The research methodology includes a comprehensive literature review and a survey of 213 students from architecture and construction management programs. Results reveal that while most students recognize the value of BIM for early-stage sustainable design analysis, many lack confidence in their practical skills, highlighting a perception–practice gap. The paper examines current educational practices, identifies curriculum shortcomings, and proposes strategies, such as integrated, hands-on learning experiences, to better align academic instruction with industry needs. Distinct from previous studies that focused primarily on single-discipline or software-based training, this research provides an empirical, cross-program analysis of students’ perception–practice gaps and offers curriculum-level insights for sustainability-driven practice. These findings provide practical recommendations for enhancing BIM and sustainability education, thereby better preparing students to meet the demands of the evolving AEC sector. Full article
30 pages, 2162 KB  
Article
Decision Support for Cargo Pickup and Delivery Under Uncertainty: A Combined Agent-Based Simulation and Optimization Approach
by Renan Paula Ramos Moreno, Rui Borges Lopes, Ana Luísa Ramos, José Vasconcelos Ferreira, Diogo Correia and Igor Eduardo Santos de Melo
Computers 2025, 14(11), 462; https://doi.org/10.3390/computers14110462 - 25 Oct 2025
Abstract
This article introduces an innovative hybrid methodology that integrates deterministic Mixed-Integer Linear Programming (MILP) optimization with stochastic Agent-Based Simulation (ABS) to address the Pickup and Delivery Problem with Time Windows (PDP-TW). The approach is applied to real-world operational data from a luggage-handling company in Lisbon, covering 158 service requests from January 2025. The MILP model generates optimal routing and task allocation plans, which are subsequently stress-tested under realistic uncertainties, such as variability in travel and service times, using ABS implemented in AnyLogic. The framework is iterative: violations of temporal or capacity constraints identified during the simulation are fed back into the optimization model, enabling successive adjustments until robust and feasible solutions are achieved for real-world scenarios. Additionally, the study incorporates transshipment scenarios, evaluating the impact of using warehouses as temporary hubs for order redistribution. Results include a comparative analysis between deterministic and stochastic models regarding operational efficiency, time window adherence, reduction in travel distances, and potential decreases in CO2 emissions. This work contributes to the literature by proposing a practical and robust decision-support framework aligned with contemporary demands for sustainability and efficiency in urban logistics, overcoming the limitations of purely deterministic approaches by explicitly reflecting real-world uncertainties. Full article
(This article belongs to the Special Issue Operations Research: Trends and Applications)
20 pages, 2473 KB  
Article
Approaching Challenges in Representations of Date–Time Ambiguities
by Amer Harb, Kamilla Klonowska and Daniel Einarson
Computers 2025, 14(11), 461; https://doi.org/10.3390/computers14110461 - 24 Oct 2025
Abstract
Inconsistencies in Earth's rotation, changes in calendar systems, and similar effects require time to be represented accordingly. Date–time handling in programming involves specific challenges, including conflicts between calendars, time zone discrepancies, daylight saving time, and leap second adjustments, issues that other data types like numbers and text do not encounter. This article identifies these challenges and investigates existing approaches to date–time representation. Limitations in current systems are examined, including how leap seconds, time zone variations, and inconsistent calendar representations complicate date–time handling. Inconsistent date–time representations create significant challenges, especially given the interplay of leap seconds and time zone shifts. This study highlights the need for a new approach to date–time data types that addresses these problems effectively. The article reviews existing date–time data types, explores their shortcomings, and proposes a theoretical framework for a more robust solution. The study suggests that an improved date–time data type could enhance time resolution, support leap seconds, and offer greater flexibility in handling time zone shifts, providing a more reliable alternative to current systems. By addressing issues like leap second handling and time zone shifts, the proposed framework demonstrates the feasibility of a new date–time data type, with potential for broader adoption in future systems. Full article
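A concrete Python example of one such ambiguity, using only the standard library: adding "one day" across a daylight saving transition yields two defensible answers depending on whether arithmetic is done on the wall clock or on the absolute timeline, and leap seconds cannot be represented at all.

```python
from datetime import datetime, timedelta, timezone
from zoneinfo import ZoneInfo

tz = ZoneInfo("Europe/Stockholm")
before = datetime(2025, 3, 29, 12, 0, tzinfo=tz)  # day before spring-forward

# Wall-clock arithmetic: Python adds 24 h to the fields, landing on the same
# local time even though only 23 real hours elapse across the DST gap.
print(before + timedelta(hours=24))               # 2025-03-30 12:00:00+02:00

# Absolute arithmetic: convert to UTC first; 24 elapsed hours land one local
# hour later. Two defensible answers to "one day later".
absolute = (before.astimezone(timezone.utc) + timedelta(hours=24)).astimezone(tz)
print(absolute)                                   # 2025-03-30 13:00:00+02:00

# Leap seconds are unrepresentable outright:
# datetime(2016, 12, 31, 23, 59, 60) raises ValueError.
```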
18 pages, 770 KB  
Article
Emotion in Words: The Role of Ed Sheeran and Sia’s Lyrics on the Musical Experience
by Catarina Travanca, Mónica Cruz and Abílio Oliveira
Computers 2025, 14(11), 460; https://doi.org/10.3390/computers14110460 - 24 Oct 2025
Abstract
Music plays an increasingly vital role in modern society, becoming a fundamental part of everyday life. Beyond entertainment, it contributes to emotional well-being by helping individuals express their feelings, process emotions, and find comfort during different life moments. This study explores the emotional impact of Ed Sheeran’s lyrics and Sia’s lyrics on listeners. Using an exploratory approach, it applies a text mining tool to extract data, identify key dimensions, and compare thematic elements across both artists’ work. The analysis reveals distinct emotional patterns and thematic contrasts, offering insight into how their lyrics resonate with audiences on a deeper level. These findings enhance our understanding of the emotional power of contemporary music and highlight how lyrical content can shape listeners’ emotional experiences. Moreover, the study demonstrates the value of text mining as a method for examining popular music, providing a new lens through which to explore the connection between music and emotion. Full article
21 pages, 3607 KB  
Article
Efficient Image Restoration for Autonomous Vehicles and Traffic Systems: A Knowledge Distillation Approach to Enhancing Environmental Perception
by Yongheng Zhang
Computers 2025, 14(11), 459; https://doi.org/10.3390/computers14110459 - 24 Oct 2025
Abstract
Image restoration tasks such as deraining, deblurring, and dehazing are crucial for enhancing the environmental perception of autonomous vehicles and traffic systems, particularly for tasks like vehicle detection, pedestrian detection and lane line identification. While transformer-based models excel in these tasks, their prohibitive computational complexity hinders real-world deployment on resource-constrained platforms. To bridge this gap, this paper introduces a novel Soft Knowledge Distillation (SKD) framework, designed specifically for creating highly efficient yet powerful image restoration models. Our core innovation is twofold: first, we propose a Multi-dimensional Cross-Net Attention (MCA) mechanism that allows a compact student model to learn comprehensive attention relationships from a large teacher model across both spatial and channel dimensions, capturing fine-grained details essential for high-quality restoration. Second, we pioneer the use of a contrastive learning loss at the reconstruction level, treating the teacher’s outputs as positives and the degraded inputs as negatives, which significantly elevates the student’s reconstruction quality. Extensive experiments demonstrate that our method achieves a superior trade-off between performance and efficiency, notably enhancing downstream tasks like object detection. The primary contributions of this work lie in delivering a practical and compelling solution for real-time perceptual enhancement in autonomous systems, pushing the boundaries of efficient model design. Full article
(This article belongs to the Special Issue Advanced Image Processing and Computer Vision (2nd Edition))
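The reconstruction-level contrastive idea can be written compactly: the student's output is pulled toward the teacher's restoration (positive) and pushed away from the degraded input (negative). The ratio form below is one common rendering and an assumption; the paper's exact loss may differ.

```python
import torch
import torch.nn.functional as F

def contrastive_recon_loss(student_out: torch.Tensor, teacher_out: torch.Tensor,
                           degraded_in: torch.Tensor, eps: float = 1e-6):
    """Per image, minimize distance to the teacher's restoration (positive)
    relative to distance from the degraded input (negative)."""
    pos = F.l1_loss(student_out, teacher_out, reduction="none").mean(dim=(1, 2, 3))
    neg = F.l1_loss(student_out, degraded_in, reduction="none").mean(dim=(1, 2, 3))
    return (pos / (neg + eps)).mean()

degraded = torch.randn(4, 3, 64, 64)              # rainy/blurred/hazy inputs
teacher = degraded + 0.5                          # stand-in teacher outputs
student = degraded + 0.4 * torch.rand_like(degraded)
print(contrastive_recon_loss(student, teacher, degraded))
```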
23 pages, 572 KB  
Article
Generative Artificial Intelligence and the Editing of Academic Essays: Necessary and Sufficient Ethical Judgments in Its Use by Higher Education Students
by Antonio Pérez-Portabella, Mario Arias-Oliva, Jorge de Andrés-Sánchez and Graciela Padilla-Castillo
Computers 2025, 14(11), 458; https://doi.org/10.3390/computers14110458 - 24 Oct 2025
Abstract
The emergence of generative artificial intelligence (GAI) has significantly transformed higher education. As a linguistic assistant, GAI can promote equity and reduce barriers in academic writing. However, its widespread availability also raises ethical dilemmas about integrity, fairness, and skill development. Despite the growing debate, empirical evidence on how students’ ethical evaluations influence their predicted use of GAI in academic tasks remains scarce. This study analyzes the ethical determinants of students’ determination to use GAI as a linguistic assistant in essay writing. Based on the Multidimensional Ethics Scale (MES), the model incorporates four ethical criteria: moral equity, moral relativism, consequentialism, and deontology. Data were collected from a sample of 151 university students. For the analysis, we used a mix of partial least squares structural equation modeling (PLS-SEM), aimed at testing sufficiency relationships, and necessary condition analysis (NCA), to identify minimum acceptance thresholds or necessary conditions. The PLS-SEM results show that only consequentialism is statistically relevant in explaining the predicted use. Moreover, the NCA reveals that reaching a minimum degree in the evaluations of all ethical constructs is necessary for use to occur. While the necessary condition effect size of moral equity and consequentialism is high, that of relativism and deontology is moderate. Thus, although acceptance of GAI use in the analyzed context increases only when its consequences are perceived as more favorable, for such use to occur it must be considered acceptable, which requires surpassing certain thresholds in all the ethical factors proposed as explanatory. Full article
(This article belongs to the Special Issue Present and Future of E-Learning Technologies (2nd Edition))
16 pages, 297 KB  
Article
Dealing with Class Overlap Through Cluster-Based Sample Weighting
by Patrick Thiam, Friedhelm Schwenker and Hans Armin Kestler
Computers 2025, 14(11), 457; https://doi.org/10.3390/computers14110457 - 24 Oct 2025
Abstract
The classification performance of an inference model trained in a supervised manner depends substantially on the size and quality of the labeled training data. The characteristics of the underlying data distribution significantly impact the generalization ability of a trained model, particularly in cases where some class overlap can be observed. In such cases, training a single model on the entirety of the labeled data can increase the complexity of the resulting decision boundary, leading to over-fitting and consequently to poor generalization performance. In the current work, a cluster-based sample weighting approach is proposed to improve the generalization ability of a classification model when dealing with such complex data distributions. The approach consists of first clustering the training data and subsequently optimizing cluster-specific classification models, using a loss weighted by the sample-to-cluster-center distances. An unseen sample is first assigned to a cluster and subsequently classified by the model specific to that cluster. The proposed approach was evaluated on three different pain recognition datasets, and the evaluation showed that it not only attains state-of-the-art classification performance but also systematically outperforms its single-model counterpart. Full article
(This article belongs to the Special Issue Multimodal Pattern Recognition of Social Signals in HCI (2nd Edition))
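In outline, the approach clusters the training data and fits one weighted model per cluster, down-weighting samples far from that cluster's center; inference routes each sample to its nearest cluster's model. A scikit-learn sketch with an assumed exponential weight form:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(600, 8))
y = (X[:, 0] + 0.5 * rng.normal(size=600) > 0).astype(int)

km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
models = []
for k in range(3):
    dist = np.linalg.norm(X - km.cluster_centers_[k], axis=1)
    weight = np.exp(-dist / dist.mean())   # down-weight far (overlapping) samples
    models.append(LogisticRegression().fit(X, y, sample_weight=weight))

def predict(x: np.ndarray) -> np.ndarray:
    """Route each sample to the specialist model of its nearest cluster."""
    ks = km.predict(x)
    return np.array([models[k].predict(xi[None])[0] for k, xi in zip(ks, x)])

print((predict(X) == y).mean())   # training accuracy of the routed ensemble
```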
43 pages, 20477 KB  
Article
Investigation of Cybersecurity Bottlenecks of AI Agents in Industrial Automation
by Sami Shrestha, Chipiliro Banda, Amit Kumar Mishra, Fatiha Djebbar and Deepak Puthal
Computers 2025, 14(11), 456; https://doi.org/10.3390/computers14110456 - 23 Oct 2025
Abstract
The growth of Agentic AI systems in industrial automation has brought forth new cybersecurity issues that put at risk the reliability and integrity of these systems. In this study we examine the cybersecurity issues in industrial automation in terms of the threats, risks, and vulnerabilities related to Agentic AI. We conducted a systematic literature review to report on present-day cybersecurity practices for industrial automation and Agentic AI, and we used a simulation-based approach to study the security issues and their impact on industrial automation systems. Our results identify the key areas of focus and the mitigation strategies that may be put in place to secure the integration of Agentic AI in industrial automation, informing the development of more secure and reliable industrial automation systems and ultimately improving their overall cybersecurity. Full article
(This article belongs to the Special Issue AI for Humans and Humans for AI (AI4HnH4AI))
26 pages, 5143 KB  
Article
Research on the Application of Federated Learning Based on CG-WGAN in Gout Staging Prediction
by Junbo Wang, Kaiqi Zhang, Zhibo Guan, Zi Ye, Chao Ma and Hai Huang
Computers 2025, 14(11), 455; https://doi.org/10.3390/computers14110455 - 23 Oct 2025
Abstract
Traditional federated learning frameworks face significant challenges posed by non-independent and identically distributed (non-IID) data in the healthcare domain, particularly in multi-institutional collaborative gout staging prediction. Differences in patient population characteristics, distributions of clinical indicators, and proportions of disease stages across hospitals lead to inefficient model training, increased category prediction bias, and heightened risks of privacy leakage. In the context of gout staging prediction, these issues result in decreased classification accuracy and recall, especially when dealing with minority classes. To address these challenges, this paper proposes FedCG-WGAN, a federated learning method based on conditional gradient penalization in Wasserstein GAN (CG-WGAN). By incorporating conditional information from gout staging labels and optimizing the gradient penalty mechanism, this method generates high-quality synthetic medical data, effectively mitigating the non-IID problem among clients. Building upon the synthetic data, a federated architecture is further introduced, which replaces traditional parameter aggregation with synthetic data sharing. This enables each client to design personalized prediction models tailored to their local data characteristics, thereby preserving the privacy of original data and avoiding the risk of information leakage caused by reverse engineering of model parameters. Experimental results on a real-world dataset comprising 51,127 medical records demonstrate that the proposed FedCG-WGAN significantly outperforms baseline models, achieving up to a 7.1% improvement in accuracy. Furthermore, by maintaining the composite quality score of the generated data between 0.85 and 0.88, the method achieves a favorable balance between privacy preservation and model utility. Full article
(This article belongs to the Special Issue Mobile Fog and Edge Computing)
Show Figures

Graphical abstract
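
For orientation, the sketch below shows the general mechanism CG-WGAN builds on: a conditional Wasserstein GAN with gradient penalty, in which the gout-stage label conditions both the generator and the critic. It is a minimal PyTorch illustration; the feature count, stage count, and layer sizes are hypothetical placeholders, not the paper's actual architecture.

```python
# Minimal conditional WGAN-GP sketch (illustrative only; FEATURES, STAGES,
# and layer sizes are hypothetical, not FedCG-WGAN's actual architecture).
import torch
import torch.nn as nn

FEATURES, STAGES, NOISE = 32, 4, 64  # hypothetical clinical features / gout stages / latent dim

class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(NOISE + STAGES, 128), nn.ReLU(),
                                 nn.Linear(128, FEATURES))

    def forward(self, z, y_onehot):
        # Condition on the stage label by concatenating it with the noise vector.
        return self.net(torch.cat([z, y_onehot], dim=1))

class Critic(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(FEATURES + STAGES, 128), nn.ReLU(),
                                 nn.Linear(128, 1))

    def forward(self, x, y_onehot):
        return self.net(torch.cat([x, y_onehot], dim=1))

def gradient_penalty(critic, real, fake, y_onehot):
    # WGAN-GP term: push the critic's gradient norm toward 1 on points
    # interpolated between real and synthetic samples.
    eps = torch.rand(real.size(0), 1)
    mixed = (eps * real + (1 - eps) * fake).requires_grad_(True)
    grad, = torch.autograd.grad(critic(mixed, y_onehot).sum(), mixed, create_graph=True)
    return ((grad.norm(2, dim=1) - 1) ** 2).mean()

# Critic objective per batch: critic(fake).mean() - critic(real).mean() + 10 * gradient_penalty(...)
```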

29 pages, 1324 KB  
Article
HRCD: A Hybrid Replica Method Based on Community Division Under Edge Computing
by Shengyao Sun, Ying Du, Dong Wang, Jiwei Zhang and Shengbin Liang
Computers 2025, 14(11), 454; https://doi.org/10.3390/computers14110454 - 22 Oct 2025
Viewed by 169
Abstract
With the emergence of Industry 5.0 and explosive data growth, replica allocation has become a critical issue in edge computing systems. Current methods often focus on placing replicas on edge servers near terminals, yet this may lead to edge node overload and system performance degradation, especially in large 6G edge computing communities. Meanwhile, existing terminal-based strategies struggle with the time-varying nature of terminals. To address these challenges, we propose the HRCD, a hybrid replica method based on community division. The HRCD first divides time-varying terminals into stable sets using a community division algorithm. Then, it employs fuzzy clustering analysis to select terminals with strong service capabilities as replica hosts, while using a uniform distribution to prioritize geographically local hotspot data for replication. Extensive experiments demonstrate that the HRCD effectively reduces data access latency and decreases edge server load compared to other replica strategies. Overall, the HRCD offers a promising approach to optimizing replica placement in 6G edge computing environments. Full article
(This article belongs to the Section Cloud Continuum and Enabled Applications)
Show Figures

Figure 1
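
The fuzzy clustering step mentioned in the abstract can be pictured with a small hand-rolled fuzzy c-means over per-terminal capability features, keeping terminals with strong membership in the higher-capability cluster as replica hosts. The features, values, and threshold below are invented for illustration; HRCD's actual selection criteria are defined in the paper.

```python
# Illustrative fuzzy c-means over terminal capability features; the cluster
# with the "stronger" center supplies candidate replica hosts.
import numpy as np

def fuzzy_cmeans(X, c=2, m=2.0, iters=100, seed=0):
    rng = np.random.default_rng(seed)
    U = rng.dirichlet(np.ones(c), size=len(X))        # membership matrix (N x c)
    for _ in range(iters):
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]  # membership-weighted centers
        d = np.linalg.norm(X[:, None, :] - centers[None], axis=2) + 1e-9
        U = 1.0 / (d ** (2 / (m - 1)))                # standard FCM membership update
        U /= U.sum(axis=1, keepdims=True)
    return centers, U

# Hypothetical capability features per terminal: [bandwidth, storage, uptime].
X = np.array([[10, 5, 0.9], [2, 1, 0.4], [8, 6, 0.95], [1, 2, 0.3]], float)
centers, U = fuzzy_cmeans(X)
strong = centers.sum(axis=1).argmax()                 # crude proxy for the high-capability cluster
hosts = np.where(U[:, strong] > 0.5)[0]
print("candidate replica hosts:", hosts)
```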

29 pages, 7553 KB  
Article
Optimization of Emergency Notification Processes in University Campuses Through Multiplatform Mobile Applications: A Case Study
by Steven Alejandro Salazar Cazco, Christian Alejandro Dávila Fuentes, Nelly Margarita Padilla Padilla, Rosa Belén Ramos Jiménez and Johanna Gabriela Del Pozo Naranjo
Computers 2025, 14(11), 453; https://doi.org/10.3390/computers14110453 - 22 Oct 2025
Viewed by 310
Abstract
Universities face continuous challenges in ensuring rapid and efficient communication during emergencies due to outdated, fragmented, and manual notification systems. This research presents the design, development, and implementation of a multiplatform mobile application to optimize emergency notifications at the Escuela Superior Politécnica de Chimborazo (ESPOCH). The application, developed using the Flutter framework, offers real-time alert dispatch, geolocation services, and seamless integration with ESPOCH’s Security Unit through Application Programming Interfaces (APIs). A descriptive and applied research methodology was adopted, analyzing existing notification workflows and evaluating agile development methodologies. MOBILE-D was selected for its rapid iteration capabilities and alignment with small development teams. The application’s architecture incorporates a Node.js backend, Firebase Realtime Database, Google Maps API, and the ESPOCH Digital ID API for robust and scalable performance. Efficiency metrics were evaluated using ISO/IEC 25010 standards, focusing on temporal behavior. The results demonstrated a 53.92% reduction in response times compared to traditional notification processes, enhancing operational readiness and safety across the campus. This study underscores the importance of leveraging mobile technologies to streamline emergency communication and provides a scalable model for educational institutions seeking to modernize their security protocols. Full article
(This article belongs to the Section Human–Computer Interactions)
Show Figures

Figure 1
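
The reported 53.92% figure is a temporal-behaviour comparison in the ISO/IEC 25010 sense: the mean response time of the app-based workflow measured against the manual process. A toy version of that computation, with invented timing samples rather than the study's data, looks like this:

```python
# Temporal-behaviour comparison sketch (timing samples are hypothetical and
# do not reproduce the study's measurements).
manual_seconds = [410, 380, 455, 500]   # hypothetical manual-process response times
app_seconds = [190, 175, 210, 230]      # hypothetical app-based response times

mean_manual = sum(manual_seconds) / len(manual_seconds)
mean_app = sum(app_seconds) / len(app_seconds)
reduction = 100 * (mean_manual - mean_app) / mean_manual
print(f"response-time reduction: {reduction:.2f}%")  # the paper reports 53.92%
```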

32 pages, 6188 KB  
Article
Siyasat: AI-Powered AI Governance Tool to Generate and Improve AI Policies According to Saudi AI Ethics Principles
by Dabiah Alboaneen, Shaikha Alhajri, Khloud Alhajri, Muneera Aljalal, Noura Alalyani, Hajer Alsaadan, Zainab Al Thonayan and Raja Alyafer
Computers 2025, 14(11), 452; https://doi.org/10.3390/computers14110452 - 22 Oct 2025
Viewed by 697
Abstract
The rapid development of artificial intelligence (AI) and growing reliance on generative AI (GenAI) tools such as ChatGPT and Bing Chat have raised concerns about risks, including privacy violations, bias, and discrimination. AI governance is viewed as a solution, and in Saudi Arabia, the Saudi Data and Artificial Intelligence Authority (SDAIA) has introduced the AI Ethics Principles. However, many organizations face challenges in aligning their AI policies with these principles. This paper presents Siyasat, an Arabic web-based governance tool designed to generate and enhance AI policies based on SDAIA’s AI Ethics Principles. Powered by GPT-4-turbo and a Retrieval-Augmented Generation (RAG) approach, the tool uses a dataset of ten AI policies and SDAIA’s official ethics document. The results show that Siyasat achieved a BERTScore of 0.890 and Self-BLEU of 0.871 in generating AI policies, while in improving AI policies, it scored 0.870 and 0.980, showing strong consistency and quality. The paper contributes a practical solution to support public, private, and non-profit sectors in complying with Saudi Arabia’s AI Ethics Principles. Full article
Show Figures

Figure 1
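
A minimal sketch of the RAG pattern the tool relies on: score stored ethics-document passages against the request by cosine similarity, take the top matches, and prepend them to the generation prompt. The vectors, passage handling, and prompt wording are placeholders; per the abstract, the assembled prompt would then be sent to GPT-4-turbo.

```python
# RAG retrieval-and-prompt-assembly sketch (embeddings and wording are
# placeholders, not the tool's actual pipeline).
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def retrieve(query_vec, doc_vecs, docs, k=2):
    # Rank stored passages by similarity to the request and keep the top k.
    scores = [cosine(query_vec, d) for d in doc_vecs]
    top = np.argsort(scores)[::-1][:k]
    return [docs[i] for i in top]

def build_prompt(request, passages):
    context = "\n---\n".join(passages)
    return ("Using the SDAIA AI Ethics Principles excerpts below, "
            "draft or improve the requested AI policy.\n"
            f"{context}\n\nRequest: {request}")
```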

30 pages, 3604 KB  
Article
Integrated Systems Ontology (ISOnto): Integrating Engineering Design and Operational Feedback for Dependable Systems
by Haytham Younus, Felician Campean, Sohag Kabir, Pascal Bonnaud and David Delaux
Computers 2025, 14(11), 451; https://doi.org/10.3390/computers14110451 - 22 Oct 2025
Viewed by 351
Abstract
This paper proposes an integrated ontological framework, Integrated Systems Ontology (ISOnto), for dependable systems engineering by semantically linking design models with real-world operational failure data. Building upon the recently proposed Function–Behaviour–Structure–Failure Modes (FBSFM) framework, ISOnto integrates early-stage design information with field-level evidence to support more informed, traceable, and dependable failure analysis. This extends the semantic scope of the FBSFM ontology to include operational/field feedback from warranty claims and technical inspections, enabling two-way traceability between design-phase assumptions (functions, behaviours, structures, and failure modes) and field-reported failures, causes, and effects. As a theoretical contribution, ISOnto introduces a formal semantic bridge between design and operational phases, strengthening the validation of known failure modes and the discovery of previously undocumented ones. Developed using established ontology engineering practices and formalised in OWL with Protégé, it incorporates domain-specific extensions to represent field data with structured mappings to design entities. A real-world automotive case study conducted with a global manufacturer demonstrates ISOnto’s ability to consolidate multisource lifecycle data into a coherent, machine-readable repository. The framework supports advanced reasoning, structured querying, and system-level traceability, thereby facilitating continuous improvement, data-driven validation, and more reliable decision-making across product development and reliability engineering. Full article
(This article belongs to the Special Issue Recent Trends in Dependable and High Availability Systems)
Show Figures

Figure 1
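
The kind of two-way design-to-field traceability described above can be sketched with rdflib: assert a design-phase failure mode, link a field warranty claim to it, and query the link. The class and property names here are hypothetical illustrations, not ISOnto's actual vocabulary.

```python
# Design-to-field traceability sketch with rdflib (all names are hypothetical,
# not ISOnto's vocabulary).
from rdflib import Graph, Namespace, RDF, Literal

EX = Namespace("http://example.org/isonto#")  # placeholder namespace
g = Graph()

g.add((EX.SealLeak, RDF.type, EX.FailureMode))            # design-phase failure mode
g.add((EX.Claim042, RDF.type, EX.WarrantyClaim))          # field evidence
g.add((EX.Claim042, EX.confirmsFailureMode, EX.SealLeak))
g.add((EX.Claim042, EX.reportedEffect, Literal("coolant loss")))

# Structured query: which design failure modes have field confirmation, and with what effect?
q = """
SELECT ?fm ?effect WHERE {
  ?claim a <http://example.org/isonto#WarrantyClaim> ;
         <http://example.org/isonto#confirmsFailureMode> ?fm ;
         <http://example.org/isonto#reportedEffect> ?effect .
}
"""
for fm, effect in g.query(q):
    print(fm, effect)
```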
