Journal Description
Computers is an international, scientific, peer-reviewed, open access journal of computer science, including computer and network architecture and computer–human interaction as its main foci, published monthly online by MDPI.
- Open Access: free for readers, with article processing charges (APC) paid by authors or their institutions.
- High Visibility: indexed within Scopus, ESCI (Web of Science), dblp, Inspec, Ei Compendex, and other databases.
- Journal Rank: JCR - Q2 (Computer Science, Interdisciplinary Applications) / CiteScore - Q1 (Computer Science (miscellaneous))
- Rapid Publication: manuscripts are peer-reviewed and a first decision is provided to authors approximately 16.3 days after submission; acceptance to publication takes 3.8 days (median values for papers published in this journal in the first half of 2025).
- Recognition of Reviewers: reviewers who provide timely, thorough peer-review reports receive vouchers entitling them to a discount on the APC of their next publication in any MDPI journal, in appreciation of the work done.
Impact Factor: 4.2 (2024); 5-Year Impact Factor: 3.5 (2024)
Latest Articles
Network Data Flow Collection Methods for Cybersecurity: A Systematic Literature Review
Computers 2025, 14(10), 407; https://doi.org/10.3390/computers14100407 - 24 Sep 2025
Abstract
Network flow collection has become a cornerstone of cyber defence, yet the literature still lacks a consolidated view of which technologies are effective across different environments and conditions. We conducted a systematic review of 362 publications indexed in six digital libraries between January 2019 and July 2025, of which 51 met PRISMA 2020 eligibility criteria. All extraction materials are archived on OSF. NetFlow derivatives appear in 62.7% of the studies, IPFIX in 45.1%, INT/P4 or OpenFlow mirroring in 17.6%, and sFlow in 9.8%, with totals exceeding 100% because several papers evaluate multiple protocols. In total, 17 of the 51 studies (33.3%) tested production links of at least 40 Gbps, while others remained in laboratory settings. Fewer than half reported packet-loss thresholds or privacy controls, and none adopted a shared benchmark suite. These findings highlight trade-offs between throughput, fidelity, computational cost, and privacy, as well as gaps in encrypted-traffic support and GDPR-compliant anonymisation. Most importantly, our synthesis demonstrates that flow-collection methods directly shape what can be detected: some exporters are effective for volumetric attacks such as DDoS, while others enable visibility into brute-force authentication, botnets, or IoT malware. In other words, the choice of telemetry technology determines which threats and anomalous behaviours remain visible or hidden to defenders. By mapping technologies, metrics, and gaps, this review provides a single reference point for researchers, engineers, and regulators facing the challenges of flow-aware cybersecurity.
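As an aid to reading the figures above, the protocol shares are fractions of the 51 included studies and can legitimately sum to more than 100%; a minimal Python sketch (with study counts inferred from the stated percentages) makes the arithmetic explicit:

```python
# Sanity check of the reported coverage figures (counts inferred from the
# stated percentages of the 51 included studies).
included = 51
protocol_share = {
    "NetFlow derivatives": 0.627,
    "IPFIX": 0.451,
    "INT/P4 or OpenFlow mirroring": 0.176,
    "sFlow": 0.098,
}

for name, share in protocol_share.items():
    print(f"{name}: ~{round(share * included)} of {included} studies")

total = sum(protocol_share.values())
print(f"Summed shares: {total:.1%} (>100% because several papers evaluate multiple protocols)")
print(f"Production links of at least 40 Gbps: 17/{included} = {17 / included:.1%}")
```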
Full article
(This article belongs to the Section ICT Infrastructures for Cybersecurity)
Open Access Article
Benchmarking the Responsiveness of Open-Source Text-to-Speech Systems
by Ha Pham Thien Dinh, Rutherford Agbeshi Patamia, Ming Liu and Akansel Cosgun
Computers 2025, 14(10), 406; https://doi.org/10.3390/computers14100406 - 23 Sep 2025
Abstract
Responsiveness—the speed at which a text-to-speech (TTS) system produces audible output—is critical for real-time voice assistants yet has received far less attention than perceptual quality metrics. Existing evaluations often touch on latency but do not establish reproducible, open-source standards that capture responsiveness as a first-class dimension. This work introduces a baseline benchmark designed to fill that gap. Our framework unifies latency distribution, tail latency, and intelligibility within a transparent and dataset-diverse pipeline, enabling a fair and replicable comparison across 13 widely used open-source TTS models. By grounding evaluation in structured input sets ranging from single words to sentence-length utterances and adopting a methodology inspired by standardized inference benchmarks, we capture both typical and worst-case user experiences. Unlike prior studies that emphasize closed or proprietary systems, our focus is on establishing open, reproducible baselines rather than ranking against commercial references. The results reveal substantial variability across architectures, with some models delivering near-instant responses while others fail to meet interactive thresholds. By centering evaluation on responsiveness and reproducibility, this study provides an infrastructural foundation for benchmarking TTS systems and lays the groundwork for more comprehensive assessments that integrate both fidelity and speed.
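To make the responsiveness notion concrete, the sketch below shows one way time-to-first-audio latency and its tail could be measured and summarised; `synthesize` is a hypothetical stand-in for any of the evaluated open-source TTS models, not part of the paper's framework:

```python
import time
import statistics

def synthesize(text: str) -> bytes:
    """Hypothetical stand-in for a real TTS call."""
    time.sleep(0.05)          # pretend the model takes ~50 ms before audio is available
    return b"\x00" * 16000    # dummy audio payload

def measure_latency(prompts, runs=5):
    """Collect per-request time-to-first-audio samples (in seconds)."""
    samples = []
    for _ in range(runs):
        for text in prompts:
            start = time.perf_counter()
            synthesize(text)
            samples.append(time.perf_counter() - start)
    return samples

prompts = ["Yes.", "Turn on the lights.", "What is the weather like in Melbourne today?"]
lat = measure_latency(prompts)
qs = statistics.quantiles(lat, n=100)   # percentile cut points
print(f"median {qs[49]*1e3:.1f} ms | p95 {qs[94]*1e3:.1f} ms | p99 {qs[98]*1e3:.1f} ms")
```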
Full article
(This article belongs to the Special Issue Natural Language Processing (NLP) and Large Language Modelling (2nd Edition))
Open Access Article
Eghatha: A Blockchain-Based System to Enhance Disaster Preparedness
by Ayoub Ghani, Ahmed Zinedine and Mohammed El Mohajir
Computers 2025, 14(10), 405; https://doi.org/10.3390/computers14100405 - 23 Sep 2025
Abstract
Natural disasters often strike unexpectedly, leaving thousands of victims and affected individuals each year. Effective disaster preparedness is critical to reducing these consequences and accelerating recovery. This paper presents Eghatha, a blockchain-based decentralized system designed to optimize humanitarian aid delivery during crises. By enabling secure and transparent transfers of donations and relief from donors to beneficiaries, the system enhances trust and operational efficiency. All transactions are immutably recorded and verified on a blockchain network, reducing fraud and misuse while adapting to local contexts. The platform is volunteer-driven, coordinated by civil society organizations with humanitarian expertise, and supported by government agencies involved in disaster response. Eghatha’s design accounts for disaster-related constraints—including limited mobility, varying levels of technological literacy, and resource accessibility—by offering a user-friendly interface, support for local currencies, and integration with locally available technologies. These elements ensure inclusivity for diverse populations. Aligned with Morocco’s “Digital Morocco 2030” strategy, the system contributes to both immediate crisis response and long-term digital transformation. Its scalable architecture and contextual sensitivity position the platform for broader adoption in similarly affected regions worldwide, offering a practical model for ethical, decentralized, and resilient humanitarian logistics.
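The trust properties described above rest on an append-only, verifiable transaction record. The following toy hash-chained ledger (a minimal Python illustration of the idea, not the Eghatha implementation, which runs on a blockchain network) shows why tampering with a recorded donation is detectable:

```python
import hashlib, json, time

def record(chain, donor, beneficiary, amount, currency="MAD"):
    """Append a donation entry whose hash commits to the previous entry."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    entry = {"donor": donor, "beneficiary": beneficiary, "amount": amount,
             "currency": currency, "ts": time.time(), "prev": prev_hash}
    entry["hash"] = hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()
    chain.append(entry)
    return entry

def verify(chain):
    """Recompute every hash; altering any earlier entry breaks the chain."""
    for i, entry in enumerate(chain):
        body = {k: v for k, v in entry.items() if k != "hash"}
        if entry["hash"] != hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest():
            return False
        if i and entry["prev"] != chain[i - 1]["hash"]:
            return False
    return True

ledger = []
record(ledger, donor="anonymous", beneficiary="relief-centre-01", amount=250)
print(verify(ledger))  # True until any recorded field is altered
```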
Full article
(This article belongs to the Special Issue Blockchain Technology—a Breakthrough Innovation for Modern Industries (2nd Edition))
Open Access Article
Security-Aware Adaptive Video Streaming via Watermarking: Tackling Time-to-First-Byte Delays and QoE Issues in Live Video Delivery Systems
by Reza Kalan, Peren Jerfi Canatalay and Emre Karsli
Computers 2025, 14(10), 404; https://doi.org/10.3390/computers14100404 - 23 Sep 2025
Abstract
Illegal broadcasting is one of the primary challenges for Over the Top (OTT) service providers. Watermarking is a method used to trace illegal redistribution of video content. However, watermarking introduces processing overhead due to the embedding of unique patterns into the video content, which results in additional latency. End-to-end network latency, caused by network congestion or heavy load on the origin server, can slow data transmission, impacting the time it takes for the segment to reach the client. This paper addresses 5xx errors (e.g., 503, 504) at the Content Delivery Network (CDN) in real-world video streaming platforms, which can negatively impact Quality of Experience (QoE), particularly when watermarking techniques are employed. To address the performance issues caused by the integration of watermarking technology, we enhanced the system architecture by introducing and optimizing a shield cache in front of the packager at the origin server and fine-tuning the CDN configuration. These optimizations significantly reduced the processing load on the packager, minimized latency, and improved overall content delivery. As a result, we achieved a 6% improvement in the Key Performance Indicator (KPI), reflecting enhanced system stability and video quality.
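The architectural fix is essentially request collapsing: a shield cache in front of the packager ensures that repeated CDN requests for the same segment reach the origin only once. A minimal sketch of that idea, with a hypothetical `fetch_from_packager` standing in for the expensive packaging and watermark-embedding step:

```python
from functools import lru_cache

ORIGIN_HITS = 0

def fetch_from_packager(segment_url: str) -> bytes:
    """Hypothetical origin call: packaging plus watermark embedding (expensive)."""
    global ORIGIN_HITS
    ORIGIN_HITS += 1
    return f"segment-data:{segment_url}".encode()

@lru_cache(maxsize=1024)
def shield_cache(segment_url: str) -> bytes:
    """Only the first request for a given segment reaches the packager."""
    return fetch_from_packager(segment_url)

for _ in range(100):                       # 100 CDN edge requests...
    shield_cache("/live/ch1/seg_001.m4s")  # ...for the same live segment
print(ORIGIN_HITS)                         # -> 1
```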
Full article
(This article belongs to the Special Issue Multimedia Data and Network Security)
Open Access Article
Beyond Opacity: Distributed Ledger Technology as a Catalyst for Carbon Credit Market Integrity
by Stanton Heister, Felix Kin Peng Hui, David Ian Wilson and Yaakov Anker
Computers 2025, 14(9), 403; https://doi.org/10.3390/computers14090403 - 22 Sep 2025
Abstract
The 2015 Paris Agreement paved the way for the carbon trade economy, which has since evolved but has not attained a substantial magnitude. While carbon credit exchange is a critical mechanism for achieving global climate targets, it faces persistent challenges related to transparency, double-counting, and verification. This paper examines how Distributed Ledger Technology (DLT) can address these limitations by providing immutable transaction records, automated verification through digitally encoded smart contracts, and increased market efficiency. To assess DLT’s strategic potential for the carbon markets and, more explicitly, whether its implementation can reduce transaction costs and enhance market integrity, three alternative approaches that apply DLT to carbon trading were taken as case studies. A comparison of key elements in these DLT-based carbon credit platforms indicates that the proposed frameworks could be developed into a scalable global platform. The integration of the existing compliance markets in the EU (case study 1), Australia (case study 2), and China (case study 3) could serve as a standard for establishing global carbon trade. The findings from these case studies suggest that while DLT offers a promising path toward more sustainable carbon markets, regulatory harmonization, standardization, and data transfer across platforms remain significant challenges.
Full article
(This article belongs to the Special Issue Blockchain Technology—a Breakthrough Innovation for Modern Industries (2nd Edition))
Open Access Article
Optimising Contextual Embeddings for Meaning Conflation Deficiency Resolution in Low-Resourced Languages
by Mosima A. Masethe, Sunday O. Ojo and Hlaudi D. Masethe
Computers 2025, 14(9), 402; https://doi.org/10.3390/computers14090402 - 22 Sep 2025
Abstract
Meaning conflation deficiency (MCD) presents a continual obstacle in natural language processing (NLP), especially for low-resourced and morphologically complex languages, where polysemy and contextual ambiguity diminish model precision in word sense disambiguation (WSD) tasks. This paper examines the optimisation of contextual embedding models, namely XLNet, ELMo, BART, and their improved variations, to tackle MCD in linguistic settings. Utilising Sesotho sa Leboa as a case study, researchers devised an enhanced XLNet architecture with specific hyperparameter optimisation, dynamic padding, early termination, and class-balanced training. Comparative assessments reveal that the optimised XLNet attains an accuracy of 91% and exhibits balanced precision–recall metrics of 92% and 91%, respectively, surpassing both its baseline counterpart and competing models. Optimised ELMo attained the greatest overall metrics (accuracy: 92%, F1-score: 96%), whilst optimised BART demonstrated significant accuracy improvements (96%) despite a reduced recall. The results demonstrate that fine-tuning contextual embeddings using MCD-specific methodologies significantly improves semantic disambiguation for under-represented languages. This study offers a scalable and flexible optimisation approach suitable for additional low-resource language contexts.
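For readers wanting a concrete picture of the optimisation ingredients named above (dynamic padding, early termination, and class-balanced training), the sketch below shows how they typically combine in an XLNet fine-tuning loop. It is illustrative only: the toy data, class counts, and hyperparameters are assumptions, not the paper's Sesotho sa Leboa setup.

```python
import torch
from torch.utils.data import DataLoader
from transformers import AutoTokenizer, XLNetForSequenceClassification, DataCollatorWithPadding

tokenizer = AutoTokenizer.from_pretrained("xlnet-base-cased")
model = XLNetForSequenceClassification.from_pretrained("xlnet-base-cased", num_labels=3)
collate = DataCollatorWithPadding(tokenizer)              # dynamic per-batch padding

def make_dataset(pairs):
    """Tokenise (sentence, sense-id) pairs into features the collator can pad."""
    return [dict(tokenizer(text, truncation=True), labels=label) for text, label in pairs]

# Toy placeholder data; the real corpus would be Sesotho sa Leboa WSD examples.
train_data = make_dataset([("toy sentence one", 0), ("toy sentence two", 1),
                           ("toy sentence three", 2), ("another toy sentence", 0)])
val_data = make_dataset([("held-out toy sentence", 1)])

# Class-balanced loss: weight each sense inversely to its (assumed) frequency.
counts = torch.tensor([2.0, 1.0, 1.0])
loss_fn = torch.nn.CrossEntropyLoss(weight=counts.sum() / (len(counts) * counts))
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

def mean_loss(data):
    """Average validation loss, used to decide when to stop training."""
    model.eval()
    with torch.no_grad():
        losses = [loss_fn(model(**{k: v for k, v in b.items() if k != "labels"}).logits, b["labels"])
                  for b in DataLoader(data, batch_size=8, collate_fn=collate)]
    return torch.stack(losses).mean().item()

best_val, patience, bad_epochs = float("inf"), 2, 0
for epoch in range(10):
    model.train()
    for batch in DataLoader(train_data, batch_size=2, shuffle=True, collate_fn=collate):
        labels = batch.pop("labels")
        loss = loss_fn(model(**batch).logits, labels)
        loss.backward(); optimizer.step(); optimizer.zero_grad()
    val = mean_loss(val_data)
    if val < best_val:
        best_val, bad_epochs = val, 0
    else:
        bad_epochs += 1
        if bad_epochs >= patience:                        # early termination (early stopping)
            break
```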
Full article

Open Access Article
Perceptual Image Hashing Fusing Zernike Moments and Saliency-Based Local Binary Patterns
by Wei Li, Tingting Wang, Yajun Liu and Kai Liu
Computers 2025, 14(9), 401; https://doi.org/10.3390/computers14090401 - 21 Sep 2025
Abstract
This paper proposes a novel perceptual image hashing scheme that robustly combines global structural features with local texture information for image authentication. The method starts with image normalization and Gaussian filtering to ensure scale invariance and suppress noise. A saliency map is then generated from a color vector angle matrix using a frequency-tuned model to identify perceptually significant regions. Local Binary Pattern (LBP) features are extracted from this map to represent fine-grained textures, while rotation-invariant Zernike moments are computed to capture global geometric structures. These local and global features are quantized and concatenated into a compact binary hash. Extensive experiments on standard databases show that the proposed method outperforms state-of-the-art algorithms in both robustness against content-preserving manipulations and discriminability across different images. Quantitative evaluations based on ROC curves and AUC values confirm its superior robustness–uniqueness trade-off, demonstrating the effectiveness of the saliency-guided fusion of Zernike moments and LBP for reliable image hashing.
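As an illustration of the global-plus-local fusion described above, the sketch below computes Zernike moments and an LBP histogram and binarises them into a short hash. It is a simplified stand-in (the saliency map and colour-vector-angle steps are omitted), and the `mahotas`/`scikit-image` calls are one possible realisation rather than the authors' code:

```python
import numpy as np
from mahotas.features import zernike_moments
from skimage import color, transform
from skimage.feature import local_binary_pattern

def perceptual_hash(image_rgb, size=256, degree=8, lbp_points=8, lbp_radius=1):
    """Toy global+local hash: Zernike moments + LBP histogram, thresholded at the median."""
    gray = color.rgb2gray(image_rgb)
    gray = transform.resize(gray, (size, size), anti_aliasing=True)   # scale normalisation

    # Global geometric structure: rotation-invariant Zernike moments.
    zernike = zernike_moments((gray * 255).astype(np.uint8), radius=size // 2, degree=degree)

    # Local texture: uniform LBP histogram.
    lbp = local_binary_pattern(gray, P=lbp_points, R=lbp_radius, method="uniform")
    hist, _ = np.histogram(lbp, bins=lbp_points + 2, range=(0, lbp_points + 2), density=True)

    features = np.concatenate([zernike, hist])
    return (features > np.median(features)).astype(np.uint8)          # compact binary hash

def hamming_distance(h1, h2):
    """Smaller distance means perceptually closer images."""
    return int(np.count_nonzero(h1 != h2))

# Example: compare a toy random image with a slightly brightened copy.
img = np.random.rand(300, 300, 3)
print(hamming_distance(perceptual_hash(img), perceptual_hash(np.clip(img * 1.05, 0, 1))))
```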
Full article

Open Access Article
SemaTopic: A Framework for Semantic-Adaptive Probabilistic Topic Modeling
by Amani Drissi, Salma Sassi, Richard Chbeir, Anis Tissaoui and Abderrazek Jemai
Computers 2025, 14(9), 400; https://doi.org/10.3390/computers14090400 - 19 Sep 2025
Abstract
Topic modeling is a crucial technique for Natural Language Processing (NLP) which helps to automatically uncover coherent topics from large-scale text corpora. Yet, classic methods tend to suffer from poor semantic depth and topic coherence. In this regard, we present here a new approach, “SemaTopic”, to improve the quality and interpretability of discovered topics. By exploiting semantic understanding and stronger clustering dynamics, our approach results in a more continuous, finer and more stable representation of the topics. Experimental results demonstrate that SemaTopic achieves a relative gain of +6.2% in semantic coherence compared to BERTopic on the 20 Newsgroups dataset ( vs. 0.5004), while maintaining stable performance across heterogeneous and multilingual corpora. These findings highlight “SemaTopic” as a scalable and reliable solution for practical text mining and knowledge discovery.
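The coherence figures quoted above are of the kind produced by standard topic-coherence measures; purely as an illustration of how such a score is computed (a generic gensim `c_v` evaluation on a toy corpus, not SemaTopic itself):

```python
from gensim.corpora import Dictionary
from gensim.models.coherencemodel import CoherenceModel

# Toy tokenised corpus and two candidate topics (top words per topic).
texts = [["gpu", "driver", "kernel", "linux"],
         ["game", "team", "season", "score"],
         ["linux", "kernel", "patch", "driver"]]
topics = [["linux", "kernel", "driver"], ["game", "team", "season"]]

dictionary = Dictionary(texts)
cm = CoherenceModel(topics=topics, texts=texts, dictionary=dictionary, coherence="c_v")
print(round(cm.get_coherence(), 4))   # one score per model, comparable across topic models
```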
Full article
(This article belongs to the Special Issue Advances in Semantic Multimedia and Personalized Digital Content)
Open Access Article
Educational QA System-Oriented Answer Selection Model Based on Focus Fusion of Multi-Perspective Word Matching
by Xiaoli Hu, Junfei He, Zhaoyu Shou, Ziming Liu and Huibing Zhang
Computers 2025, 14(9), 399; https://doi.org/10.3390/computers14090399 - 19 Sep 2025
Abstract
Question-answering systems have become an important tool for learning and knowledge acquisition. However, current answer selection models often rely on representing features using whole sentences, which leads to neglecting individual words and losing important information. To address this challenge, the paper proposes a novel answer selection model based on focus fusion of multi-perspective word matching. First, according to the different combination relationships between sentences, focus distribution in terms of words is obtained from the matching perspectives of serial, parallel, and transfer. Then, the sentence’s key position information is inferred from its focus distribution. Finally, a method of aligning key information points is designed to fuse the focus distribution for each perspective, which obtains match scores for each candidate answer to the question. Experimental results show that the proposed model significantly outperforms the Transformer encoder fine-tuned model based on contextual embedding, achieving a 4.07% and 5.51% increase in MAP and a 1.63% and 4.86% increase in MRR, respectively.
Full article

Open Access Article
Integration of Information and Communication Technology in Curriculum Practices: The Case of Preservice Accounting Teachers
by Lineo Mphatsoane-Sesoane, Loyiso Currell Jita and Molaodi Tshelane
Computers 2025, 14(9), 398; https://doi.org/10.3390/computers14090398 - 19 Sep 2025
Abstract
This empirical paper explores South African preservice accounting teachers’ perceptions of ICT integration in secondary schools’ accounting curriculum practices. Since 2020, curriculum practices have been characterised by disruptions to traditional teaching and learning methods, including those brought on by the COVID-19 pandemic. Accounting curriculum practices were no exception. These disruptions sparked discussions about pedagogical changes, academic continuity, and the future of accounting curriculum practices. The theoretical framework used to guide the research process is connectivism, a theory concerned with forming connections between people and technology, and with teaching and learning in a connectivist learning environment. Connectivism promotes a lifelong learning perspective by training teachers and students to adapt to a fast-changing environment. An interpretive paradigm underpins this qualitative research paper. The data were collected from semi-structured interviews with five preservice accounting teachers about how they navigated pedagogy while switching to digital curriculum practices. Thematic analysis was used. The findings revealed that preservice accounting teachers faced challenges in ICT integration during school-based training, including limited resources, inadequate infrastructure, and insufficient hands-on training. While ICT tools enhanced learner engagement, barriers such as low digital skills and a lack of technical support hindered effective use. Participants highlighted a disconnect between theoretical training and classroom practice, prompting self-directed learning to bridge skill gaps. The study underscores the need for teacher education programs to provide practical, immersive ICT training to equip future educators for technology-driven classrooms.
Full article
Open Access Article
Development of an Early Lung Cancer Diagnosis Method Based on a Neural Network
by Indira Karymsakova, Dinara Kozhakhmetova, Dariga Bekenova, Danila Ostroukh, Roza Bekbayeva, Lazat Kydyralina, Alina Bugubayeva and Dinara Kurushbayeva
Computers 2025, 14(9), 397; https://doi.org/10.3390/computers14090397 - 18 Sep 2025
Abstract
Cancer is one of the most lethal diseases in the modern world. Early diagnosis significantly contributes to prolonging the life expectancy of patients. The application of intelligent systems and AI methods is crucial for diagnosing oncological diseases. Primarily, expert systems or decision support systems are utilized in such cases. This research explores early lung cancer diagnosis through protocol-based questioning, considering the impact of nuclear testing factors. Nuclear tests conducted historically continue to affect citizens’ health. A classification of regions into five groups was proposed based on their proximity to nuclear test sites. The weighting coefficient was assigned accordingly, in proportion to the distance from the test zones. In this study, existing expert systems were analyzed and classified. Approaches used to build diagnostic expert systems for oncological diseases were grouped by how well they apply to different tumor localizations. An online questionnaire based on the lung cancer diagnostic protocol was created to gather input data for the neural network. To support this diagnostic method, a functional block diagram of the intelligent system “Oncology” was developed. The following methods were used to create the mathematical model: gradient boosting, multilayer perceptron, and Hamming network. Finally, a web application architecture for early lung cancer detection was proposed.
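The questionnaire answers are combined with a region weighting coefficient reflecting proximity to nuclear test sites before classification. A small sketch of how such a weighted feature could feed the gradient-boosting and multilayer-perceptron models mentioned above (synthetic data throughout; the five-group region weights shown are assumptions, not the study's values):

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic protocol answers: age, smoking years, symptom score, region group (0..4).
n = 500
X_raw = np.column_stack([
    rng.integers(30, 85, n),     # age
    rng.integers(0, 50, n),      # smoking years
    rng.integers(0, 4, n),       # symptom severity from the questionnaire
    rng.integers(0, 5, n),       # region group by proximity to test sites
])
# Assumed weighting coefficients: closer regions receive a higher weight.
region_weight = np.array([1.0, 0.8, 0.6, 0.4, 0.2])
X = np.column_stack([X_raw[:, :3], region_weight[X_raw[:, 3]]])
y = rng.integers(0, 2, n)        # synthetic high/low-risk labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
for clf in (GradientBoostingClassifier(), MLPClassifier(max_iter=1000)):
    clf.fit(X_tr, y_tr)
    print(type(clf).__name__, round(clf.score(X_te, y_te), 3))
```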
Full article
(This article belongs to the Special Issue AI in Its Ecosystem)
Open Access Article
AI Test Modeling for Computer Vision System—A Case Study
by Jerry Gao and Radhika Agarwal
Computers 2025, 14(9), 396; https://doi.org/10.3390/computers14090396 - 18 Sep 2025
Abstract
This paper presents an intelligent AI test modeling framework for computer vision systems, focused on image-based systems. A three-dimensional (3D) model using decision tables enables model-based function testing, automated test data generation, and comprehensive coverage analysis. A case study using the Seek by iNaturalist application demonstrates the framework’s applicability to real-world CV tasks. It effectively identifies species and non-species under varying image conditions such as distance, blur, brightness, and grayscale. This study contributes a structured methodology that advances our academic understanding of model-based CV testing while offering practical tools for improving the robustness and reliability of AI-driven vision applications.
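The decision-table model enumerates image conditions to drive test data generation and coverage analysis. A small sketch of generating such a test matrix over the conditions named in the abstract (the concrete levels per condition are assumptions):

```python
from itertools import product

# Condition dimensions taken from the abstract; the specific levels are assumed.
conditions = {
    "distance":   ["close", "medium", "far"],
    "blur":       ["none", "mild", "heavy"],
    "brightness": ["low", "normal", "high"],
    "color":      ["rgb", "grayscale"],
}

test_cases = [dict(zip(conditions, combo)) for combo in product(*conditions.values())]
print(len(test_cases))      # 3 * 3 * 3 * 2 = 54 combinations to exercise the CV model
print(test_cases[0])        # e.g. {'distance': 'close', 'blur': 'none', ...}
```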
Full article
(This article belongs to the Special Issue Advanced Image Processing and Computer Vision (2nd Edition))
Open Access Article
Optimizing Teacher Portfolio Integrity with a Cost-Effective Smart Contract for School-Issued Teacher Documents
by Diana Laura Silaghi, Andrada Cristina Artenie and Daniela Elena Popescu
Computers 2025, 14(9), 395; https://doi.org/10.3390/computers14090395 - 17 Sep 2025
Abstract
Diplomas and academic transcripts issued at the conclusion of a university cycle have been the subject of numerous studies focused on developing secure methods for their registration and access. However, in the context of high school teachers, these initial credentials mark only the starting point of a much more complex professional journey. Throughout their careers, teachers receive a wide array of certificates and attestations related to professional development, participation in educational projects, volunteering, and institutional contributions. Many of these documents are issued directly by the school administration and are often vulnerable to misplacement, unauthorized alterations, or limited portability. These challenges are amplified when teachers move between schools or are involved in teaching across multiple institutions. In response to this need, this paper proposes a blockchain-based solution built on the Ethereum platform, which ensures the integrity, traceability, and long-term accessibility of such records, preserving the professional achievements of teachers across their careers. Although most research has focused on securing highly valuable documents on blockchain, such as diplomas, certificates, and micro-credentials, this study highlights the importance of extending blockchain solutions to school-issued attestations, as they carry significant weight in teacher evaluation and the development of professional portfolios.
Full article
(This article belongs to the Special Issue Blockchain Technology—a Breakthrough Innovation for Modern Industries (2nd Edition))
Open Access Review
Fake News Detection Using Machine Learning and Deep Learning Algorithms: A Comprehensive Review and Future Perspectives
by Faisal A. Alshuwaier and Fawaz A. Alsulaiman
Computers 2025, 14(9), 394; https://doi.org/10.3390/computers14090394 - 16 Sep 2025
Abstract
Currently, with significant developments in technology and social networks, people gain rapid access to news without focusing on its reliability. Consequently, the proportion of fake news has increased. Fake news is a significant problem that hinders societies today, as it negatively impacts many aspects, including politics, the economy, and society. Fake news is widely disseminated via social media through modern digital platforms. In this paper, we focus on conducting a comprehensive review of fake news detection using machine learning and deep learning. Additionally, this review provides a brief survey and evaluation, as well as a discussion of gaps, and explores future perspectives. Through this research, this review addresses various research questions. This review also focuses on the importance of machine learning and deep learning for fake news detection by providing a comparison and discussion of how they are used to detect fake news. The review covers work published between 2018 and 2025, with the most common publishers and venues being IEEE, Intelligent Systems, EMNLP, ACM, Springer, Elsevier, JAIR, and others, and its results can be used to determine the most effective algorithms in terms of performance. Accordingly, articles that did not report the algorithms used or their performance were excluded.
Full article

Open Access Article
Secret Sharing Scheme with Share Verification Capability
by Nursulu Kapalova, Armanbek Haumen and Kunbolat Algazy
Computers 2025, 14(9), 393; https://doi.org/10.3390/computers14090393 - 16 Sep 2025
Abstract
This paper examines the properties of classical secret sharing schemes used in information protection systems, including the protection of valuable and confidential data. It addresses issues such as implementation complexity, limited flexibility, and vulnerability to new types of attacks, outlines the requirements for such schemes, and analyzes existing approaches to addressing them. A new secret sharing scheme is proposed as a potential solution to these challenges. The developed scheme is based on multivariable functions. The shares distributed among participants represent the values of these functions. Secret reconstruction is reduced to solving a system of linear equations composed of such functions. The structure and mathematical foundation of the scheme are presented, along with an analysis of its properties. A key feature of the proposed scheme is the incorporation of functions aimed at authenticating participants and verifying the integrity of the distributed shares. The paper also provides a cryptanalysis of the scheme, evaluates its resistance to various types of attacks, and discusses the results obtained. Thus, this work contributes to the advancement of information security methods by offering a modern and reliable solution for the secure storage and joint use of secret data.
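For orientation, the classical baseline such schemes build upon reduces reconstruction to solving a linear system over a finite field. A minimal Shamir-style threshold sketch is shown below; it is the textbook scheme only, without the proposed multivariable functions or the share-verification and participant-authentication features:

```python
import random

P = 2**127 - 1  # a large prime; all arithmetic is over GF(P)

def split(secret, n, k):
    """Create n shares; any k of them determine the degree-(k-1) polynomial."""
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    return [(x, sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at 0, i.e. solving the linear system for the coefficients."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

shares = split(secret=123456789, n=5, k=3)
print(reconstruct(shares[:3]) == 123456789)   # True with any 3 of the 5 shares
```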
Full article
(This article belongs to the Section ICT Infrastructures for Cybersecurity)
Open Access Article
SeismicNoiseAnalyzer: A Deep-Learning Tool for Automatic Quality Control of Seismic Stations
by Alessandro Pignatelli, Paolo Casale, Veronica Vignoli and Flavia Tavani
Computers 2025, 14(9), 392; https://doi.org/10.3390/computers14090392 - 16 Sep 2025
Abstract
SeismicNoiseAnalyzer 1.0 is a software tool designed to automatically assess the quality of seismic stations through the classification of spectral diagrams. By leveraging convolutional neural networks trained on expert-labeled data, the software emulates human visual inspection of probability density function (PDF) plots. It supports both individual image analysis and batch processing from compressed archives, providing detailed reports that summarize station health. Two classification networks are available: a binary model that distinguishes between working and malfunctioning stations and a ternary model that introduces an intermediate “doubtful” category to capture ambiguous cases. The system demonstrates high agreement with expert evaluations and enables efficient instrumentation control across large seismic networks. Its intuitive graphical interface and automated workflow make it a valuable tool for routine monitoring and data validation.
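A compact sketch of the kind of convolutional classifier such a tool relies on, here for the binary working/malfunctioning case (Keras; the input size and layer widths are assumptions rather than the published architecture):

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_pdf_classifier(input_shape=(224, 224, 3), n_classes=2):
    """Small CNN mapping a PDF (probability density function) plot image to a station-health class."""
    return models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(16, 3, activation="relu"), layers.MaxPooling2D(),
        layers.Conv2D(32, 3, activation="relu"), layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"), layers.GlobalAveragePooling2D(),
        layers.Dense(64, activation="relu"),
        layers.Dense(n_classes, activation="softmax"),   # use 3 classes for the ternary "doubtful" variant
    ])

model = build_pdf_classifier()
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.summary()
```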
Full article

Open Access Article
Lightweight Embedded IoT Gateway for Smart Homes Based on an ESP32 Microcontroller
by Filippos Serepas, Ioannis Papias, Konstantinos Christakis, Nikos Dimitropoulos and Vangelis Marinakis
Computers 2025, 14(9), 391; https://doi.org/10.3390/computers14090391 - 16 Sep 2025
Abstract
The rapid expansion of the Internet of Things (IoT) demands scalable, efficient, and user-friendly gateway solutions that seamlessly connect resource-constrained edge devices to cloud services. Low-cost, widely available microcontrollers, such as the ESP32 and its ecosystem peers, offer integrated Wi-Fi/Bluetooth connectivity, low power consumption, and a mature developer toolchain at a bill of materials cost of only a few dollars. For smart-home deployments where budgets, energy consumption, and maintainability are critical, these characteristics make MCU-class gateways a pragmatic alternative to single-board computers, enabling always-on local control with minimal overhead. This paper presents the design and implementation of an embedded IoT gateway powered by the ESP32 microcontroller. By using lightweight communication protocols such as Message Queuing Telemetry Transport (MQTT) and REST APIs, the proposed architecture supports local control, distributed intelligence, and secure on-site data storage, all while minimizing dependence on cloud infrastructure. A real-world deployment in an educational building demonstrates the gateway’s capability to monitor energy consumption, execute control commands, and provide an intuitive web-based dashboard with minimal resource overhead. Experimental results confirm that the solution offers strong performance, with RAM usage ranging between 3.6% and 6.8% of available memory (approximately 8.92 KB to 16.9 KB). The initial loading of the single-page application (SPA) results in a temporary RAM spike to 52.4%, which later stabilizes at 50.8%. These findings highlight the ESP32’s ability to serve as a functional IoT gateway with minimal resource demands. Areas for future optimization include improved device discovery mechanisms and enhanced resource management to prolong device longevity. Overall, the gateway represents a cost-effective and vendor-agnostic platform for building resilient and scalable IoT ecosystems.
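On the protocol side, telemetry is exchanged over lightweight MQTT topics. A minimal sketch of a client publishing an energy reading toward such a gateway/broker (the broker host and topic layout are assumptions, not the deployment described in the paper):

```python
import json
import time
import paho.mqtt.publish as publish

# Hypothetical broker and topic layout; adjust to the actual deployment.
BROKER = "gateway.local"
TOPIC = "building/room101/energy"

reading = {"power_w": 412.7, "timestamp": time.time(), "device": "meter-01"}
publish.single(TOPIC, payload=json.dumps(reading), hostname=BROKER, port=1883, qos=1)
```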
Full article
(This article belongs to the Section Internet of Things (IoT) and Industrial IoT)
Open Access Review
Optimizing Kubernetes with Multi-Objective Scheduling Algorithms: A 5G Perspective
by Mazen Farid, Heng Siong Lim, Chin Poo Lee, Charilaos C. Zarakovitis and Su Fong Chien
Computers 2025, 14(9), 390; https://doi.org/10.3390/computers14090390 - 15 Sep 2025
Abstract
This review provides an in-depth examination of multi-objective scheduling algorithms within 5G networks, with a particular focus on Kubernetes-based container orchestration. As 5G systems evolve, efficient resource allocation and the optimization of Quality-of-Service (QoS) metrics, including response time, energy efficiency, scalability, and resource utilization, have become increasingly critical. Given the scheduler’s central role in orchestrating containerized workloads, this study analyzes diverse scheduling strategies designed to address these competing objectives. A novel taxonomy is introduced to categorize existing approaches, offering a structured view of deterministic, heuristic, and learning-based methods. Furthermore, the review identifies key research challenges, highlights open issues, such as QoS-aware orchestration and resilience in distributed environments, and outlines prospective directions to advance multi-objective scheduling in Kubernetes for next-generation networks. By synthesizing current knowledge and mapping research gaps, this work aims to provide both a foundation for newcomers and a practical reference for advancing scholarly and industrial efforts in the field.
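Many of the surveyed heuristics ultimately score candidate nodes with a weighted combination of the competing objectives. A toy weighted-sum scorer, purely as an illustration (the objective names and weights are assumptions, not a specific surveyed algorithm):

```python
# Toy multi-objective node scoring: higher score = better placement candidate.
weights = {"cpu_free": 0.4, "mem_free": 0.3, "energy_efficiency": 0.2, "low_latency": 0.1}

nodes = {
    "node-a": {"cpu_free": 0.70, "mem_free": 0.50, "energy_efficiency": 0.90, "low_latency": 0.60},
    "node-b": {"cpu_free": 0.40, "mem_free": 0.80, "energy_efficiency": 0.60, "low_latency": 0.95},
}

def score(metrics):
    """Weighted sum of normalised (0..1) objective values."""
    return sum(weights[k] * metrics[k] for k in weights)

best = max(nodes, key=lambda n: score(nodes[n]))
print({n: round(score(m), 3) for n, m in nodes.items()}, "->", best)
```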
Full article

Open Access Article
DCGAN Feature-Enhancement-Based YOLOv8n Model in Small-Sample Target Detection
by Peng Zheng, Yun Cheng, Wei Zhu, Bo Liu, Chenhao Ye, Shijie Wang, Shuhong Liu and Jinyin Bai
Computers 2025, 14(9), 389; https://doi.org/10.3390/computers14090389 - 15 Sep 2025
Abstract
This paper proposes DCGAN-YOLOv8n, an integrated framework that significantly advances small-sample target detection by synergizing generative adversarial feature enhancement with multi-scale representation learning. The model’s core contribution lies in its novel adversarial feature enhancement module (AFEM), which leverages conditional generative adversarial networks to reconstruct discriminative multi-scale features while effectively mitigating mode collapse. Furthermore, the architecture incorporates a deformable multi-scale feature pyramid that dynamically fuses generated high-resolution features with hierarchical semantic representations through an attention mechanism. The proposed triple marginal constraint optimization jointly enhances intra-class compactness and inter-class separation, thereby structuring a highly discriminative feature space. Extensive experiments on the NWPU VHR-10 dataset demonstrate state-of-the-art performance, with the model achieving an mAP50 of 90.46% and an mAP50-95 of 57.06%, representing significant improvements of 4.52% and 4.08% over the baseline YOLOv8n, respectively. These results validate the framework’s effectiveness in addressing critical challenges of feature representation scarcity and cross-scale adaptation in data-limited scenarios.
Full article
(This article belongs to the Special Issue Machine Learning Applications in Pattern Recognition)
Open Access Article
Integrative Federated Learning Framework for Multimodal Parkinson’s Disease Biomarker Fusion
by Ruchira Pratihar and Ravi Sankar
Computers 2025, 14(9), 388; https://doi.org/10.3390/computers14090388 - 15 Sep 2025
Abstract
Accurate and early diagnosis of Parkinson’s Disease (PD) is challenged by the diverse manifestations of motor and non-motor symptoms across different patients. Existing studies largely rely on limited datasets and biomarkers. In this extended research, we propose a comprehensive Federated Learning (FL) framework designed to integrate heterogeneous biomarkers through multimodal combinations—such as EEG–fMRI pairs, continuous speech with vowel pronunciation, and the fusion of EEG, gait, and accelerometry data—drawn from diverse sources and modalities. By processing data separately at client nodes and performing feature and decision fusion at a central server, our method preserves privacy and enables robust PD classification. Experimental results show accuracies exceeding 85% across multiple fusion techniques, with attention-based fusion reaching 97.8% for Freezing of Gait (FoG) detection. Our framework advances scalable, privacy-preserving, multimodal diagnostics for PD.
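The privacy-preserving aggregation step described above is, in its simplest form, federated averaging of client parameters at the server. A minimal numpy sketch of that generic FedAvg step (a baseline illustration, not the paper's attention-based fusion):

```python
import numpy as np

def fed_avg(client_weights, client_sizes):
    """Weighted average of client model parameters, proportional to local dataset size."""
    total = sum(client_sizes)
    return [
        sum(size / total * client[layer] for client, size in zip(client_weights, client_sizes))
        for layer in range(len(client_weights[0]))
    ]

# Toy example: two clients (e.g., an EEG site and a gait site), two parameter arrays each.
client_a = [np.ones((3, 3)), np.zeros(3)]
client_b = [np.full((3, 3), 3.0), np.ones(3)]
global_model = fed_avg([client_a, client_b], client_sizes=[100, 300])
print(global_model[0][0, 0])   # 0.25*1 + 0.75*3 = 2.5
```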
Full article
(This article belongs to the Special Issue Application of Artificial Intelligence and Modeling Frameworks in Health Informatics and Related Fields)
Topics
Topic in
Animals, Computers, Information, J. Imaging, Veterinary Sciences
AI, Deep Learning, and Machine Learning in Veterinary Science Imaging
Topic Editors: Vitor Filipe, Lio Gonçalves, Mário Ginja
Deadline: 31 October 2025
Topic in
Applied Sciences, Computers, Electronics, JSAN, Technologies
Emerging AI+X Technologies and Applications
Topic Editors: Byung-Seo Kim, Hyunsik Ahn, Kyu-Tae Lee
Deadline: 31 December 2025
Topic in
Applied Sciences, Computers, Entropy, Information, MAKE, Systems
Opportunities and Challenges in Explainable Artificial Intelligence (XAI)
Topic Editors: Luca Longo, Mario Brcic, Sebastian Lapuschkin
Deadline: 31 January 2026
Topic in
AI, Computers, Education Sciences, Societies, Future Internet, Technologies
AI Trends in Teacher and Student Training
Topic Editors: José Fernández-Cerero, Marta Montenegro-Rueda
Deadline: 11 March 2026

Special Issues
Special Issue in
Computers
Present and Future of E-Learning Technologies (2nd Edition)
Guest Editor: Antonio Sarasa Cabezuelo
Deadline: 30 September 2025
Special Issue in
Computers
Wireless Sensor Network, IoT and Cloud Computing Technologies for Smart Cities
Guest Editor: Lilatul Ferdouse
Deadline: 30 September 2025
Special Issue in
Computers
Applications of Machine Learning and Artificial Intelligence for Healthcare
Guest Editor: Elias Dritsas
Deadline: 30 September 2025
Special Issue in
Computers
Artificial Intelligence in Control
Guest Editors: Mads Sloth Vinding, Ivan Maximov, Christoph Aigner
Deadline: 30 September 2025