Journal Description
Computers is an international, scientific, peer-reviewed, open access journal of computer science, including computer and network architecture and computer–human interaction as its main foci, published monthly online by MDPI.
- Open Access: free for readers, with article processing charges (APC) paid by authors or their institutions.
- High Visibility: indexed within Scopus, ESCI (Web of Science), dblp, Inspec, and other databases.
- Journal Rank: CiteScore - Q2 (Computer Networks and Communications)
- Rapid Publication: manuscripts are peer-reviewed and a first decision is provided to authors approximately 17.9 days after submission; acceptance to publication takes 2.9 days (median values for papers published in this journal in the first half of 2023).
- Recognition of Reviewers: reviewers who provide timely, thorough peer-review reports receive vouchers entitling them to a discount on the APC of their next publication in any MDPI journal, in appreciation of the work done.
Impact Factor: 2.8 (2022); 5-Year Impact Factor: 2.6 (2022)
Latest Articles
Exploring the Potential of Distributed Computing Continuum Systems
Computers 2023, 12(10), 198; https://doi.org/10.3390/computers12100198 - 02 Oct 2023
Abstract
Computing paradigms have evolved significantly in recent decades, moving from large room-sized resources (processors and memory) to incredibly small computing nodes. Computing power now reaches almost all application fields, and distributed computing continuum systems (DCCSs) are unleashing the era of a computing paradigm that unifies various computing resources, including cloud, fog/edge computing, the Internet of Things (IoT), and mobile devices, into a seamless and integrated continuum. Its seamless infrastructure efficiently manages diverse processing loads and ensures a consistent user experience. Furthermore, it provides a holistic solution to meet modern computing needs. In this context, this paper presents a deeper understanding of DCCSs' potential in today's computing environment. First, we discuss the evolution of computing paradigms up to DCCS. The general architectures, components, and various computing devices are discussed, and the benefits and limitations of each computing paradigm are analyzed. After that, our discussion continues into the various computing devices that constitute part of a DCCS to achieve computational goals in current and futuristic applications. In addition, we delve into the key features and benefits of DCCS from the perspective of current computing needs. Furthermore, we provide a comprehensive overview of emerging applications (with a case study analysis) that urgently need DCCS architectures to perform their tasks. Finally, we describe the open challenges and possible developments that DCCS needs in order to unleash its widespread potential for the majority of applications.
(This article belongs to the Special Issue Artificial Intelligence in Industrial IoT Applications)
Open Access Article
Comparison of Automated Machine Learning (AutoML) Tools for Epileptic Seizure Detection Using Electroencephalograms (EEG)
Computers 2023, 12(10), 197; https://doi.org/10.3390/computers12100197 - 29 Sep 2023
Abstract
Epilepsy is a neurological disease characterized by recurrent seizures caused by abnormal electrical activity in the brain. One of the methods used to diagnose epilepsy is through electroencephalogram (EEG) analysis. EEG is a non-invasive medical test for quantifying electrical activity in the brain. Applying machine learning (ML) to EEG data for epilepsy diagnosis has the potential to be more accurate and efficient. However, expert knowledge is required to set up an ML model with correct hyperparameters. Automated machine learning (AutoML) tools aim to make ML more accessible to non-experts and automate many ML processes to create a high-performing ML model. This article explores the use of AutoML tools for diagnosing epilepsy using EEG data. The study compares the performance of three different AutoML tools, AutoGluon, Auto-Sklearn, and Amazon SageMaker, on three different datasets: the UC Irvine ML Repository, the Bonn EEG time series dataset, and Zenodo. Performance measures used for evaluation include accuracy, F1 score, recall, and precision. The results show that all three AutoML tools were able to generate high-performing ML models for the diagnosis of epilepsy. The generated ML models perform better when the training dataset is larger, while Amazon SageMaker and Auto-Sklearn performed better with smaller datasets. This is the first study to compare several AutoML tools, and it shows that AutoML tools can be utilized to create well-performing solutions for the diagnosis of epilepsy by processing hard-to-analyze EEG time-series data.
(This article belongs to the Special Issue Artificial Intelligence in Control)
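As an illustration of the kind of comparison the abstract describes, the sketch below runs two of the named tools, AutoGluon and Auto-Sklearn, on a labelled EEG feature table; the file name, column names, and time budget are assumptions for illustration, not the study's configuration.

```python
# Minimal sketch: comparing two AutoML tools on a labelled EEG feature table.
# Assumes a table with feature columns and a binary "seizure" label; the file
# name, column names, and 10-minute budget are illustrative assumptions.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score

df = pd.read_csv("eeg_features.csv")          # hypothetical feature file
train, test = train_test_split(df, test_size=0.2, stratify=df["seizure"])

# --- AutoGluon ---
from autogluon.tabular import TabularPredictor
ag = TabularPredictor(label="seizure", eval_metric="f1").fit(train, time_limit=600)
ag_pred = ag.predict(test.drop(columns=["seizure"]))

# --- Auto-Sklearn ---
import autosklearn.classification
ask = autosklearn.classification.AutoSklearnClassifier(time_left_for_this_task=600)
ask.fit(train.drop(columns=["seizure"]), train["seizure"])
ask_pred = ask.predict(test.drop(columns=["seizure"]))

for name, pred in [("AutoGluon", ag_pred), ("Auto-Sklearn", ask_pred)]:
    print(name, "F1 =", f1_score(test["seizure"], pred))
```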
Open Access Article
An Improved Dandelion Optimizer Algorithm for Spam Detection: Next-Generation Email Filtering System
Computers 2023, 12(10), 196; https://doi.org/10.3390/computers12100196 - 28 Sep 2023
Abstract
Spam emails have become a pervasive issue in recent years, as internet users receive increasing amounts of unwanted or fake emails. To combat this issue, automatic spam detection methods have been proposed, which aim to classify emails into spam and non-spam categories. Machine learning techniques have been utilized for this task with considerable success. In this paper, we introduce a novel approach to spam email detection by presenting significant advancements to the Dandelion Optimizer (DO) algorithm. The DO is a relatively new nature-inspired optimization algorithm modeled on the flight of dandelion seeds. While the DO shows promise, it faces challenges, especially in high-dimensional problems such as feature selection for spam detection. Our primary contributions focus on enhancing the DO algorithm. Firstly, we introduce a new local search algorithm based on flipping (LSAF), designed to improve the DO's ability to find the best solutions. Secondly, we propose a reduction equation that shrinks the population size during algorithm execution, reducing computational complexity. To showcase the effectiveness of our modified DO algorithm, which we refer to as the Improved DO (IDO), we conduct a comprehensive evaluation using the Spambase dataset from the UCI repository. However, we emphasize that our primary objective is to advance the DO algorithm, with spam email detection serving as a case study application. Comparative analysis against several popular algorithms, including Particle Swarm Optimization (PSO), the Genetic Algorithm (GA), Generalized Normal Distribution Optimization (GNDO), the Chimp Optimization Algorithm (ChOA), the Grasshopper Optimization Algorithm (GOA), the Ant Lion Optimizer (ALO), and the Dragonfly Algorithm (DA), demonstrates the superior performance of our proposed IDO algorithm. It excels in accuracy, fitness, and the number of selected features, among other metrics. Our results clearly indicate that the IDO overcomes the local optima problem commonly associated with the standard DO algorithm, owing to the incorporation of the LSAF and reduction-equation methods. In summary, our paper underscores the significant advancement made in the form of the IDO algorithm, which represents a promising approach for solving high-dimensional optimization problems, with a keen focus on practical applications in real-world systems. While we employ spam email detection as a case study, our primary contribution lies in the improved DO algorithm, which is efficient, accurate, and outperforms several state-of-the-art algorithms on various metrics. This work opens avenues for enhancing optimization techniques and their applications in machine learning.
(This article belongs to the Topic Modeling and Practice for Trustworthy and Secure Systems)
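To make the two enhancements concrete, here is a minimal sketch of a flip-based local search over a binary feature mask and a linear population-reduction schedule. It is a generic reading of the ideas the abstract names (LSAF and the reduction equation), under an assumed toy objective, not the authors' exact formulation.

```python
import numpy as np

rng = np.random.default_rng(0)
weights = rng.random(57)    # toy per-feature utilities (Spambase has 57 features)

def fitness(mask):
    # placeholder objective: utility of selected features minus a size penalty
    return mask @ weights - 0.5 * mask.sum()

def local_search_by_flipping(solution, n_trials=50):
    """Flip one random bit at a time; keep a flip only if fitness improves.
    A generic flip-based local search in the spirit of LSAF, not the exact method."""
    best, best_fit = solution.copy(), fitness(solution)
    for _ in range(n_trials):
        cand = best.copy()
        j = rng.integers(len(cand))
        cand[j] ^= 1                           # flip feature j in or out
        if (f := fitness(cand)) > best_fit:
            best, best_fit = cand, f
    return best, best_fit

def reduced_population_size(n_init, n_min, t, t_max):
    """Assumed linear schedule: shrink the population from n_init to n_min."""
    return round(n_init - (n_init - n_min) * t / t_max)

mask, fit = local_search_by_flipping(rng.integers(0, 2, size=57))
print(mask.sum(), "features selected, fitness", round(fit, 3))
print("population at iteration 50 of 100:", reduced_population_size(30, 10, 50, 100))
```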
Open Access Article
Rapidrift: Elementary Techniques to Improve Machine Learning-Based Malware Detection
Computers 2023, 12(10), 195; https://doi.org/10.3390/computers12100195 - 28 Sep 2023
Abstract
Artificial intelligence and machine learning have become a necessary part of modern living along with the increased adoption of new computational devices. Because machine learning and artificial intelligence can detect malware better than traditional signature detection, malware authors continually develop novel malware aimed at bypassing detection, and detection models may consequently experience concept drift: as new malware samples appear, detection performance drops. Our work discusses this performance degradation of machine learning-based malware detectors over time, also called concept drift. To achieve this goal, we develop a Python-based framework, namely Rapidrift, capable of analysing concept drift at a more granular level. We also created two new malware datasets, TRITIUM and INFRENO, from different sources and threat profiles to conduct a deeper analysis of the concept drift problem. To test the effectiveness of Rapidrift, various fundamental methods that could reduce the effects of concept drift were experimentally explored.
(This article belongs to the Special Issue Software-Defined Internet of Everything)
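The core measurement behind such a study can be reproduced with a simple time-partitioned evaluation: train on the earliest period, then score each later period and watch the metric decay. The sketch below is a generic illustration with assumed file and column names; it does not use the Rapidrift framework itself.

```python
# Minimal sketch of measuring concept drift: train a detector on the earliest
# month of samples, then track F1 on each later month. The file name and the
# "label"/"first_seen" columns are assumptions for illustration.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score

df = pd.read_csv("malware_features.csv", parse_dates=["first_seen"])  # hypothetical
df["period"] = df["first_seen"].dt.to_period("M")
periods = sorted(df["period"].unique())

train = df[df["period"] == periods[0]]
clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(train.drop(columns=["label", "first_seen", "period"]), train["label"])

for p in periods[1:]:
    test = df[df["period"] == p]
    pred = clf.predict(test.drop(columns=["label", "first_seen", "period"]))
    print(p, "F1 =", round(f1_score(test["label"], pred), 3))  # falling F1 = drift
```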
Open Access Article
Predictive Modeling of Student Dropout in MOOCs and Self-Regulated Learning
Computers 2023, 12(10), 194; https://doi.org/10.3390/computers12100194 - 27 Sep 2023
Abstract
The primary objective of this study is to examine the factors that contribute to the early prediction of Massive Open Online Courses (MOOCs) dropouts in order to identify and support at-risk students. We utilize MOOC data of specific duration, with a guided study pace. The dataset exhibits class imbalance, and we apply oversampling techniques to ensure data balancing and unbiased prediction. We examine the predictive performance of five classic classification machine learning (ML) algorithms under four different oversampling techniques and various evaluation metrics. Additionally, we explore the influence of self-reported self-regulated learning (SRL) data provided by students and various other prominent features of MOOCs as potential indicators of early stage dropout prediction. The research questions focus on (1) the performance of the classic classification ML models using various evaluation metrics before and after different methods of oversampling, (2) which self-reported data may constitute crucial predictors for dropout propensity, and (3) the effect of the SRL factor on the dropout prediction performance. The main conclusions are: (1) prominent predictors, including employment status, frequency of chat tool usage, prior subject-related experiences, gender, education, and willingness to participate, exhibit remarkable efficacy in achieving high to excellent recall performance, particularly when specific combinations of algorithms and oversampling methods are applied, (2) self-reported SRL factor, combined with easily provided/self-reported features, performed well as a predictor in terms of recall when LR and SVM algorithms were employed, (3) it is crucial to test diverse machine learning algorithms and oversampling methods in predictive modeling.
(This article belongs to the Special Issue Recent Advances in Computer-Assisted Learning)
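The balancing-plus-classification setup the abstract describes can be sketched with the imbalanced-learn and scikit-learn libraries, reading LR and SVM as logistic regression and a support vector machine; the synthetic data stands in for the MOOC features.

```python
# Minimal sketch: oversample the minority (dropout) class with SMOTE on the
# training split only, then compare recall for LR and SVM.
from imblearn.over_sampling import SMOTE
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import recall_score

X, y = make_classification(n_samples=2000, n_features=12, weights=[0.85],
                           random_state=0)            # imbalanced: ~15% dropouts
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
X_bal, y_bal = SMOTE(random_state=0).fit_resample(X_tr, y_tr)

for name, model in [("LR", LogisticRegression(max_iter=1000)), ("SVM", SVC())]:
    model.fit(X_bal, y_bal)
    print(name, "recall =", round(recall_score(y_te, model.predict(X_te)), 3))
```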
Open Access Article
Prospective ICT Teachers’ Perceptions on the Didactic Utility and Player Experience of a Serious Game for Safe Internet Use and Digital Intelligence Competencies
Computers 2023, 12(10), 193; https://doi.org/10.3390/computers12100193 - 26 Sep 2023
Abstract
Nowadays, young students spend a lot of time playing video games and browsing on the Internet. Internet use has become even more widespread among young students due to the COVID-19 pandemic lockdown, which moved several educational activities online. The Internet, and generally the digital world that we live in, offers many possibilities in our everyday lives, but it also entails dangers such as cyber threats and unethical use of personal data. It is widely accepted that everyone, especially young students, should be educated on safe Internet use and supported in acquiring other Digital Intelligence (DI) competencies as well. Towards this goal, we present the design and evaluation of the game "Follow the Paws", which aims to educate primary school students on safe Internet use and support them in acquiring relevant DI competencies. The game was designed taking into account the relevant literature and was evaluated by 213 prospective Information and Communication Technology (ICT) teachers. The participants playtested the game and evaluated it through an online questionnaire based on validated instruments proposed in the literature. The participants rated the didactic utility of the game and the anticipated player experience positively, while highlighting several improvements to be taken into consideration in a future revision of the game. Based on the results, proposals for further research are presented, including the detection of DI competencies through the game and an evaluation of its actual effectiveness in the classroom.
(This article belongs to the Special Issue Game-Based Learning, Gamification in Education and Serious Games 2023)
Open Access Article
Explain Trace: Misconceptions of Control-Flow Statements
Computers 2023, 12(10), 192; https://doi.org/10.3390/computers12100192 - 24 Sep 2023
Abstract
Control-flow statements often cause misunderstandings among novice computer science students. To better address these problems, teachers need to know the misconceptions that are typical at this stage. In this paper, we present the results of studying students’ misconceptions about control-flow statements. We compiled 181 questions, each containing an algorithm written in pseudocode and the execution trace of that algorithm. Some of the traces were correct; others contained highlighted errors. The students were asked to explain in their own words why the selected line of the trace was correct or erroneous. We collected and processed 10,799 answers from 67 CS1 students. Among the 24 misconceptions we found, 6 coincided with misconceptions from other studies, and 7 were narrower cases of known misconceptions. We did not find previous research regarding 11 of the misconceptions we identified.
(This article belongs to the Special Issue Future Trends in Computer Programming Education)
Open Access Communication
Analyzing Public Reactions, Perceptions, and Attitudes during the MPox Outbreak: Findings from Topic Modeling of Tweets
Computers 2023, 12(10), 191; https://doi.org/10.3390/computers12100191 - 23 Sep 2023
Abstract
In the last decade and a half, the world has experienced outbreaks of a range of viruses such as COVID-19, H1N1, flu, Ebola, Zika virus, Middle East Respiratory Syndrome (MERS), measles, and West Nile virus, to name a few. During these outbreaks, the usage and effectiveness of social media platforms increased significantly, as such platforms served as virtual communities, enabling their users to share and exchange information, news, perspectives, opinions, ideas, and comments related to the outbreaks. Analysis of this Big Data of outbreak-related conversations using Natural Language Processing concepts such as Topic Modeling has attracted the attention of researchers from different disciplines such as Healthcare, Epidemiology, Data Science, Medicine, and Computer Science. The recent outbreak of the Mpox virus has resulted in a tremendous increase in the usage of Twitter. Prior works in this area of research have primarily focused on the sentiment analysis and content analysis of these Tweets, and the few works that have focused on topic modeling have multiple limitations. This paper aims to address this research gap and makes two scientific contributions to this field. First, it presents the results of performing Topic Modeling on 601,432 Tweets about the 2022 Mpox outbreak that were posted on Twitter between 7 May 2022 and 3 March 2023. The results indicate that the conversations on Twitter related to Mpox during this time range may be broadly categorized into four distinct themes: Views and Perspectives about Mpox, Updates on Cases and Investigations about Mpox, Mpox and the LGBTQIA+ Community, and Mpox and COVID-19. Second, the paper presents the findings from the analysis of these Tweets. The results show that the theme that was most popular on Twitter (in terms of the number of Tweets posted) during this time range was Views and Perspectives about Mpox, followed by Mpox and the LGBTQIA+ Community, then Mpox and COVID-19, and Updates on Cases and Investigations about Mpox, respectively. Finally, a comparison with related studies in this area of research is presented to highlight the novelty and significance of this research work.
(This article belongs to the Special Issue When Natural Language Processing Meets Machine Learning—Opportunities, Challenges and Solutions)
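A topic-modeling pipeline of this kind can be sketched with scikit-learn's LDA implementation; the tiny dummy corpus, vectorizer settings, and the choice of LDA itself are assumptions, with four components mirroring the four reported themes.

```python
# Minimal sketch: LDA topic modeling over a tweet corpus, with four topics to
# mirror the four reported themes. The tiny corpus here is dummy data.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

tweets = [
    "mpox cases investigation update today",           # dummy stand-ins for the
    "my views on the mpox outbreak and vaccines",      # 601,432 collected Tweets
    "mpox and covid symptoms compared",
    "community health advice during the mpox outbreak",
]
vec = CountVectorizer(stop_words="english", min_df=1)
X = vec.fit_transform(tweets)
lda = LatentDirichletAllocation(n_components=4, random_state=0).fit(X)

terms = vec.get_feature_names_out()
for k, comp in enumerate(lda.components_):
    top = [terms[i] for i in comp.argsort()[-5:][::-1]]   # five strongest words
    print(f"topic {k}:", ", ".join(top))
```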
Open Access Article
Model and Fuzzy Controller Design Approaches for Stability of Modern Robot Manipulators
Computers 2023, 12(10), 190; https://doi.org/10.3390/computers12100190 - 23 Sep 2023
Abstract
Robotics is a crucial technology of Industry 4.0 that offers a diverse array of applications in the industrial sector. However, the quality of a robot's manipulator is contingent on its stability, which is a function of the manipulator's parameters. Previous studies have evaluated stability based on only a small number of manipulator parameters; as a result, little is known about how the integration, optimal arrangement, and combination of manipulator parameters contribute to stability. Through Lagrangian mechanics and the consideration of multiple parameters, a mathematical model of a modern manipulator is developed in this study. In this mathematical model, motor acceleration, moment of inertia, and deflection are considered in order to assess the stability level of a six-degree-of-freedom ABB robot manipulator. A novel mathematical approach to stability is developed in which stability is correlated with motor acceleration, moment of inertia, and deflection. In addition, fuzzy logic inference principles are employed to determine the status of stability. The numerical data for the different manipulator parameters are verified using mathematical approaches. Results indicated that stability increases as motor acceleration increases, while stability decreases as moment of inertia and deflection increase. It is anticipated that the implementation of these findings will increase industrial output.
(This article belongs to the Special Issue Artificial Intelligence in Control)
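The fuzzy-inference step can be illustrated with a small Mamdani-style system built on the scikit-fuzzy package; the universes, membership shapes, and rules below are illustrative assumptions consistent with the abstract's findings (more acceleration raises stability, more inertia lowers it), not the paper's tuned system.

```python
# Minimal sketch of fuzzy inference for a "stability" status. Requires scikit-fuzzy.
import numpy as np
import skfuzzy as fuzz
from skfuzzy import control as ctrl

accel = ctrl.Antecedent(np.linspace(0, 10, 101), "motor_acceleration")
inertia = ctrl.Antecedent(np.linspace(0, 5, 101), "moment_of_inertia")
stability = ctrl.Consequent(np.linspace(0, 100, 101), "stability")

accel.automf(3)      # auto-generates 'poor', 'average', 'good' labels
inertia.automf(3)    # for inertia, 'poor' covers the low end of the universe
stability["low"] = fuzz.trimf(stability.universe, [0, 0, 50])
stability["high"] = fuzz.trimf(stability.universe, [50, 100, 100])

rules = [
    ctrl.Rule(accel["good"] & inertia["poor"], stability["high"]),  # high accel, low inertia
    ctrl.Rule(accel["poor"] | inertia["good"], stability["low"]),   # low accel or high inertia
]
sim = ctrl.ControlSystemSimulation(ctrl.ControlSystem(rules))
sim.input["motor_acceleration"] = 7.5
sim.input["moment_of_inertia"] = 1.0
sim.compute()
print("stability score:", round(sim.output["stability"], 1))
```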
Open Access Article
Implementing Tensor-Organized Memory for Message Retrieval Purposes in Neuromorphic Chips
Computers 2023, 12(10), 189; https://doi.org/10.3390/computers12100189 - 22 Sep 2023
Abstract
This paper introduces Tensor-Organized Memory (TOM), a novel neuromorphic architecture inspired by the human brain’s structural and functional principles. Utilizing spike-timing-dependent plasticity (STDP) and Hebbian rules, TOM exhibits cognitive behaviors similar to the human brain. Compared to conventional architectures using a simplified leaky integrate-and-fire (LIF) neuron model, TOM showcases robust performance, even in noisy conditions. TOM’s adaptability and unique organizational structure, rooted in the Columnar-Organized Memory (COM) framework, position it as a transformative digital memory processing solution. Innovative neural architecture, advanced recognition mechanisms, and integration of synaptic plasticity rules enhance TOM’s cognitive capabilities. We have compared the TOM architecture with a conventional floating-point architecture, using a simplified LIF neuron model. We also implemented tests with varying noise levels and partially erased messages to evaluate its robustness. Despite the slight degradation in performance with noisy messages beyond 30%, the TOM architecture exhibited appreciable performance under less-than-ideal conditions. This exploration into the TOM architecture reveals its potential as a framework for future neuromorphic systems. This study lays the groundwork for future applications in implementing neuromorphic chips for high-performance intelligent edge devices, thereby revolutionizing industries and enhancing user experiences through the power of artificial intelligence.
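For reference, the simplified leaky integrate-and-fire neuron used as the comparison baseline can be expressed in a few lines; the time constant, threshold, and input drive below are illustrative values, not the paper's parameters.

```python
# Minimal sketch of a simplified leaky integrate-and-fire (LIF) neuron.
import numpy as np

def lif(input_current, dt=1e-3, tau=20e-3, v_rest=0.0, v_thresh=1.0, v_reset=0.0):
    """Simulate one LIF neuron; returns the membrane trace and spike times."""
    v, spikes, trace = v_rest, [], []
    for t, i_t in enumerate(input_current):
        v += dt / tau * (-(v - v_rest) + i_t)   # leaky integration of the drive
        if v >= v_thresh:                       # threshold crossing -> spike
            spikes.append(t * dt)
            v = v_reset                         # reset after spiking
        trace.append(v)
    return np.array(trace), spikes

trace, spikes = lif(np.full(200, 1.5))          # constant drive above threshold
print(f"{len(spikes)} spikes, first at t = {spikes[0]:.3f} s")
```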
Open Access Article
Evaluating Video Games as Tools for Education on Fake News and Misinformation
Computers 2023, 12(9), 188; https://doi.org/10.3390/computers12090188 - 21 Sep 2023
Abstract
Despite access to reliable information being essential for equal opportunities in our society, current school curricula include only some notions about media literacy in a limited context. Thus, it is necessary to create scenarios for reflection on and well-founded analysis of misinformation. Video games may be an effective approach to foster these skills and can seamlessly integrate learning content into their design, making it possible to achieve multiple learning outcomes and build competencies that can transfer to real-life situations. We analyzed 24 video games about media literacy by studying their content, design, and characteristics that may affect their implementation in learning settings. Even though not all of the learning outcomes considered were equally addressed, the results show that media literacy video games currently on the market could be used as effective tools to achieve critical learning goals and may allow users to understand, practice, and implement skills to fight misinformation, regardless of their complexity in terms of game mechanics. However, we detected that certain characteristics of video games may affect their implementation in learning environments, such as their availability, estimated playing time, approach, or whether they include real or fictional worlds, variables that should be further considered by both developers and educators.
(This article belongs to the Special Issue Game-Based Learning, Gamification in Education and Serious Games 2023)
Open Access Article
Addressing Uncertainty in Tool Wear Prediction with Dropout-Based Neural Network
Computers 2023, 12(9), 187; https://doi.org/10.3390/computers12090187 - 19 Sep 2023
Abstract
Data-driven algorithms have been widely applied in predicting tool wear because of the high prediction performance of the algorithms, the availability of data sets, and advancements in computing capabilities in recent years. Although most algorithms are supposed to generate outcomes with high precision and accuracy, this is not always true in practice. Uncertainty exists in distinct phases of applying data-driven algorithms due to noise and randomness in data, the presence of redundant and irrelevant features, and model assumptions. Uncertainty due to noise and missing data is known as data uncertainty. On the other hand, model assumptions and imperfection are reasons for model uncertainty. In this paper, both types of uncertainty are considered in tool wear prediction. Empirical mode decomposition is applied to reduce uncertainty from raw data. Additionally, the Monte Carlo dropout technique is used in training a neural network algorithm to incorporate model uncertainty. The unique feature of the proposed method is that it estimates tool wear as an interval, with the interval range representing the degree of uncertainty. Different performance measurement metrics are used to compare the proposed method. It is shown that the proposed approach can predict tool wear with higher accuracy.
(This article belongs to the Special Issue Uncertainty-Aware Artificial Intelligence)
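The Monte Carlo dropout idea is simple to demonstrate: keep dropout stochastic at inference time and run many forward passes, so the spread of the predictions becomes the uncertainty interval. The sketch below uses PyTorch; the architecture, dropout rate, and pass count are illustrative assumptions, not the paper's setup.

```python
# Minimal sketch of Monte Carlo dropout for interval-valued tool-wear prediction.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(16, 64), nn.ReLU(), nn.Dropout(p=0.2),
    nn.Linear(64, 64), nn.ReLU(), nn.Dropout(p=0.2),
    nn.Linear(64, 1),
)

def mc_dropout_interval(model, x, n_passes=100):
    model.train()                  # keeps Dropout stochastic at inference time
    with torch.no_grad():
        preds = torch.stack([model(x) for _ in range(n_passes)])
    mean, std = preds.mean(dim=0), preds.std(dim=0)
    return mean - 2 * std, mean + 2 * std          # ~95% uncertainty interval

x = torch.randn(1, 16)             # one feature vector (e.g., EMD-denoised signals)
lo, hi = mc_dropout_interval(model, x)
print(f"predicted wear interval: [{lo.item():.3f}, {hi.item():.3f}]")
```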
Open Access Article
Video Summarization Based on Feature Fusion and Data Augmentation
Computers 2023, 12(9), 186; https://doi.org/10.3390/computers12090186 - 15 Sep 2023
Abstract
During the last few years, several technological advances have led to an increase in the creation and consumption of audiovisual multimedia content. Users are overexposed to videos via several social media or video sharing websites and mobile phone applications. For efficient browsing, searching, and navigation across several multimedia collections and repositories, e.g., for finding videos that are relevant to a particular topic or interest, this ever-increasing content should be efficiently described by informative yet concise content representations. A common solution to this problem is the construction of a brief summary of a video, which can be presented to the user instead of the full video, so that they can then decide whether to watch or ignore the whole video. Such summaries are ideally more expressive than other alternatives, such as brief textual descriptions or keywords. In this work, the video summarization problem is approached as a supervised classification task that relies on feature fusion of audio and visual data. Specifically, the goal of this work is to generate dynamic video summaries, i.e., compositions of parts of the original video that include its most essential video segments while preserving the original temporal sequence. This work relies on datasets annotated on a per-frame basis, wherein parts of videos are annotated as being “informative” or “noninformative”, with the latter being excluded from the produced summary. The novelties of the proposed approach are: (a) prior to classification, a transfer learning strategy is employed to use deep features from pretrained models, which serve as input to the classifiers, making them more intuitive and robust to objectiveness; and (b) the training dataset was augmented by using other publicly available datasets. The proposed approach is evaluated using three datasets of user-generated videos, and it is demonstrated that deep features and data augmentation are able to improve the accuracy of video summaries based on human annotations. Moreover, the approach is domain-independent: it could be used on any video and could be extended to rely on richer feature representations or to include other data modalities.
(This article belongs to the Special Issue Artificial Intelligence Models, Tools and Applications with A Social and Semantic Impact)
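The transfer-learning step the abstract names can be sketched as per-frame deep-feature extraction from a pretrained CNN, with the features then fed to any frame classifier. The backbone choice (ResNet-18) is an assumption for illustration, not necessarily the paper's model.

```python
# Minimal sketch: extract per-frame deep features from a pretrained backbone.
import torch
import torchvision.models as models
from torchvision.models import ResNet18_Weights

backbone = models.resnet18(weights=ResNet18_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()      # drop the ImageNet head, keep features
backbone.eval()

with torch.no_grad():
    frames = torch.rand(8, 3, 224, 224)   # stand-in for preprocessed video frames
    feats = backbone(frames)              # (8, 512) deep features, one per frame
print(feats.shape)   # feed these vectors to a frame-level classifier (e.g., SVM/MLP)
```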
Open Access Article
Specification Mining over Temporal Data
Computers 2023, 12(9), 185; https://doi.org/10.3390/computers12090185 - 14 Sep 2023
Abstract
Current specification mining algorithms for temporal data rely on exhaustive search approaches, which become detrimental in real data settings where a plethora of distinct temporal behaviours are recorded over prolonged observations. This paper proposes a novel algorithm, Bolt2, based on a refined heuristic search of our previous algorithm, Bolt. Our experiments show that the proposed approach not only surpasses exhaustive search methods in terms of running time but also guarantees a minimal description that captures the overall temporal behaviour. This is achieved through a hypothesis lattice search that exploits support metrics. Our novel specification mining algorithm also outperforms the results achieved in our previous contribution.
(This article belongs to the Special Issue Advances in Database Engineered Applications 2023)
Open Access Article
Process-Oriented Requirements Definition and Analysis of Software Components in Critical Systems
Computers 2023, 12(9), 184; https://doi.org/10.3390/computers12090184 - 14 Sep 2023
Abstract
Requirements management is a key aspect in the development of software components, since complex systems are often subject to frequent updates due to continuously changing requirements. This is especially true in critical systems, i.e., systems whose failure or malfunctioning may lead to severe consequences. This paper proposes a three-step approach that incrementally refines a critical system specification, from a lightweight high-level model targeted to stakeholders, down to a formal standard model that links requirements, processes and data. The resulting model provides the requirements specification used to feed the subsequent development, verification and maintenance activities, and can also be seen as a first step towards the development of a digital twin of the physical system.
(This article belongs to the Special Issue Recent Advances in Digital Twins and Cognitive Twins)
Open Access Article
Enhancing Counterfeit Detection with Multi-Features on Secure 2D Grayscale Codes
Computers 2023, 12(9), 183; https://doi.org/10.3390/computers12090183 - 14 Sep 2023
Abstract
Counterfeit products have become a pervasive problem in the global marketplace, necessitating effective strategies to protect both consumers and brands. This study examines the role of cybersecurity in addressing counterfeiting issues, specifically focusing on a multi-level grayscale watermark-based authentication system. The system comprises a generator responsible for creating a secure 2D code and an authenticator designed to extract watermark information and verify product authenticity. To authenticate the secure 2D code, we propose various features, including analysis of the spatial domain, the frequency domain, and the grayscale watermark distribution. Furthermore, we emphasize the importance of selecting appropriate interpolation methods to enhance counterfeit detection. Our proposed approach demonstrates remarkable performance, achieving precision, recall, and specificity surpassing 84.8%, 83.33%, and 84.5%, respectively, across different datasets.
(This article belongs to the Special Issue Using New Technologies on Cyber Security Solutions)
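Two of the cue families the abstract names, spatial-domain statistics and frequency-domain energy, can be sketched with NumPy; the histogram binning and the high-frequency radius are illustrative assumptions, not the paper's feature definitions. (Recapturing or copying a printed code tends to blur fine detail, which shifts energy away from high frequencies.)

```python
# Minimal sketch: spatial- and frequency-domain features of a grayscale 2D code.
import numpy as np

def code_features(gray):                 # gray: 2D float array in [0, 1]
    # spatial domain: grayscale distribution statistics
    hist, _ = np.histogram(gray, bins=16, range=(0.0, 1.0), density=True)
    # frequency domain: share of spectral energy in the high frequencies
    spec = np.abs(np.fft.fftshift(np.fft.fft2(gray))) ** 2
    h, w = spec.shape
    yy, xx = np.ogrid[:h, :w]
    r = np.hypot(yy - h / 2, xx - w / 2)
    high_share = spec[r > min(h, w) / 4].sum() / spec.sum()
    return np.concatenate([hist, [gray.mean(), gray.std(), high_share]])

feats = code_features(np.random.rand(128, 128))   # stand-in for a scanned code
print(feats.shape)   # feature vector for a downstream authenticity classifier
```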
Open Access Article
Multispectral Image Generation from RGB Based on WSL Color Representation: Wavelength, Saturation, and Lightness
Computers 2023, 12(9), 182; https://doi.org/10.3390/computers12090182 - 13 Sep 2023
Abstract
Image processing techniques are based nearly exclusively on the RGB (red–green–blue) representation, which is significantly influenced by technological issues. The RGB triplet represents a mixture of the wavelength, saturation, and lightness values of light, which leads to unexpected chromaticity artifacts in processing. Processing based on wavelength, saturation, and lightness should therefore be more resistant to the introduction of color artifacts. However, converting RGB values to their corresponding wavelengths is not straightforward. In this contribution, a novel, simple, and accurate method for extracting the wavelength, saturation, and lightness of a color represented by an RGB triplet is described. The conversion relies on the known RGB values of the rainbow spectrum and accommodates variations in color saturation.
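To give a feel for the WSL decomposition, the crude sketch below recovers an approximate dominant wavelength plus saturation and lightness from an RGB triplet via its HSL hue. The linear hue-to-wavelength map, anchored at rough dominant wavelengths of the sRGB red and blue primaries, is purely an illustrative assumption; the paper instead derives the conversion from known rainbow-spectrum RGB values.

```python
# Crude, illustrative sketch of an RGB -> (wavelength, saturation, lightness) split.
import colorsys

def rgb_to_wsl(r, g, b):                     # r, g, b in [0, 1]
    h, l, s = colorsys.rgb_to_hls(r, g, b)   # note the HLS ordering in colorsys
    # assumed linear map: hue 0 (red, ~611 nm) .. 2/3 (blue, ~464 nm)
    if h <= 2 / 3:
        wavelength = 611 - h * (611 - 464) / (2 / 3)
    else:
        wavelength = None                    # purples have no single spectral match
    return wavelength, s, l

print(rgb_to_wsl(0.0, 1.0, 0.0))             # pure green -> roughly 537 nm
```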
Open Access Article
Building an Expert System through Machine Learning for Predicting the Quality of a Website Based on Its Completion
Computers 2023, 12(9), 181; https://doi.org/10.3390/computers12090181 - 11 Sep 2023
Abstract
The main channel for disseminating information is now the Internet. Users have different expectations for the calibre of websites regarding the posted and presented content. The website’s quality is influenced by up to 120 factors, each represented by two to fifteen attributes. A major challenge is quantifying the features and evaluating the quality of a website based on the feature counts. One of the aspects that determines a website’s quality is its completeness, which focuses on the existence of all the objects and their connections with one another. It is not easy to build an expert model based on feature counts to evaluate website quality, so this paper has focused on that challenge. Both a methodology for calculating a website’s quality and a parser-based approach for measuring feature counts are offered. We provide a multi-layer perceptron model that is an expert model for forecasting website quality from the “completeness” perspective. The accuracy of the predictions is 98%, whilst the accuracy of the nearest model is 87%.
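An expert model in the spirit of this abstract can be sketched as a multi-layer perceptron mapping website feature counts to a quality class; the synthetic data (120 features, echoing the 120 quality factors) and the layer sizes are illustrative assumptions, not the paper's configuration.

```python
# Minimal sketch: an MLP classifier over website feature counts.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

X, y = make_classification(n_samples=1000, n_features=120, n_informative=30,
                           n_classes=3, random_state=0)   # 120 quality factors
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

mlp = make_pipeline(StandardScaler(),
                    MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500,
                                  random_state=0))
mlp.fit(X_tr, y_tr)
print("accuracy:", round(mlp.score(X_te, y_te), 3))
```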
Open Access Article
Developing a Novel Hierarchical VPLS Architecture Using Q-in-Q Tunneling in Router and Switch Design
Computers 2023, 12(9), 180; https://doi.org/10.3390/computers12090180 - 07 Sep 2023
Abstract
Virtual Private LAN Service (VPLS) is an Ethernet-based Virtual Private Network (VPN) service that provides multipoint-to-multipoint Layer 2 VPN service, where each site is geographically dispersed across a Wide Area Network (WAN). Although VPLS provides a flexible solution for connecting geographically dispersed sites, its adaptability and scalability are limited. Furthermore, constructing tunnels between customer locations separated by great distances adds substantial latency to the transport of user traffic. To address these issues, a novel Hierarchical VPLS (H-VPLS) architecture has been developed using 802.1Q tunneling (also known as Q-in-Q) on high-speed and commodity routers to satisfy the additional requirements of new VPLS applications. Vector Packet Processing (VPP) serves as the router’s data plane, and FRRouting (FRR), an open-source network routing software suite, acts as the router’s control plane. The router is designed to seamlessly forward VPLS packets following Requests For Comments (RFCs) 4762, 4446, 4447, 4448, and 4385 from the Internet Engineering Task Force (IETF), integrated with VPP. In addition, the Label Distribution Protocol (LDP) is used for Multi-Protocol Label Switching (MPLS) Pseudo-Wire (PW) signaling in FRR. The proposed mechanism has been implemented on a software-based router in the Linux environment and tested for its functionality, signaling, and control plane processes. The router was also implemented on commodity hardware to test the functionality of VPLS in the real world. Finally, the analysis of the results verifies the efficiency of the proposed mechanism in terms of throughput, latency, and packet loss ratio.
(This article belongs to the Special Issue Advances in High-Performance Switching and Routing)
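The Q-in-Q (802.1ad) encapsulation the H-VPLS design relies on, an outer provider service tag stacked over the customer's inner 802.1Q tag, can be illustrated with scapy; the VLAN IDs and addresses below are assumptions for illustration, not the paper's test configuration.

```python
# Minimal sketch: building a Q-in-Q (802.1ad) frame with stacked VLAN tags.
from scapy.all import Ether, Dot1AD, Dot1Q, IP, ICMP

frame = (
    Ether(src="00:11:22:33:44:55", dst="66:77:88:99:aa:bb")
    / Dot1AD(vlan=100)        # outer S-tag added by the provider edge
    / Dot1Q(vlan=10)          # inner C-tag belonging to the customer site
    / IP(src="192.0.2.1", dst="192.0.2.2")
    / ICMP()
)
frame.show()                   # inspect the stacked-tag layering
# sendp(frame, iface="eth0")   # would transmit it (requires root privileges)
```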
Open Access Article
Optimized Downlink Scheduling over LTE Network Based on Artificial Neural Network
Computers 2023, 12(9), 179; https://doi.org/10.3390/computers12090179 - 07 Sep 2023
Abstract
Long-Term Evolution (LTE) technology is utilized efficiently for wireless broadband communication for mobile devices. It provides flexible bandwidth and frequency with high speed and peak data rates. Optimizing resource allocation is vital for improving the performance of the LTE system and meeting the user’s quality of service (QoS) needs. The resource distribution in video streaming affects the LTE network performance, reducing network fairness and causing increased delay and lower data throughput. This study proposes a novel approach utilizing artificial neural networks (ANNs) based on normalized radial basis function NN (RBFNN) and generalized regression NN (GRNN) techniques, applied to the 3rd Generation Partnership Project (3GPP) LTE downlink scheduling algorithms to derive accurate and reliable data output. The performance of the proposed methods is compared based on packet loss rate, throughput, delay, spectrum efficiency, and fairness factors. The proposed algorithm significantly improves the efficiency of real-time streaming compared to the LTE-DL algorithms. These improvements are also reflected in lower computational complexity.
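One of the two ANN techniques named, the GRNN, is essentially Nadaraya-Watson kernel regression and fits in a few lines of NumPy; the Gaussian bandwidth sigma and the toy data below are illustrative assumptions, not the paper's scheduler inputs.

```python
# Minimal sketch of a generalized regression NN (GRNN) prediction step.
import numpy as np

def grnn_predict(X_train, y_train, X_query, sigma=0.5):
    # squared distances between every query point and every training point
    d2 = ((X_query[:, None, :] - X_train[None, :, :]) ** 2).sum(axis=2)
    w = np.exp(-d2 / (2 * sigma ** 2))          # Gaussian kernel weights
    return (w @ y_train) / w.sum(axis=1)        # weighted average of targets

# toy example: learn a smooth response (e.g., throughput vs. channel quality)
X_train = np.linspace(0, 10, 50)[:, None]
y_train = np.sin(X_train[:, 0]) + 0.1 * np.random.default_rng(0).normal(size=50)
print(grnn_predict(X_train, y_train, np.array([[2.5], [7.5]])))
```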
Topics
Topic in
Computers, Entropy, Information, Mathematics
Selected Papers from ICCAI 2023 and IMIP 2023
Topic Editors: Zhitao Xiao, Guangxu Li. Deadline: 31 October 2023
Topic in
Applied Sciences, BDCC, Computers, Electronics, JSAN, Inventions, Technologies, Telecom
Electronic Communications, IOT and Big Data
Topic Editors: Teen-Hang Meen, Charles Tijus, Cheng-Chien Kuo, Kuei-Shu Hsu, Kuo-Kuang Fan, Jih-Fu Tu. Deadline: 30 November 2023
Topic in
Applied Sciences, Computers, Electronics, Sensors, Virtual Worlds
Simulations and Applications of Augmented and Virtual Reality
Topic Editors: Radu Comes, Dorin-Mircea Popovici, Calin Gheorghe Dan Neamtu, Jing-Jing Fang. Deadline: 20 December 2023
Topic in
Applied Sciences, Computers, Information, J. Imaging, Mathematics
Research on Deep Neural Networks for Electrocardiogram Classification and Automatic Diagnosis of Arrhythmia
Topic Editors: Vidya Sudarshan, Ru San Tan. Deadline: 31 December 2023

Special Issues
Special Issue in
Computers
Blockchain Technology – a Breakthrough Innovation for Modern Industries
Guest Editors: Nino Adamashvili, Caterina Tricase, Otar Zumburidze, Radu State, Roberto Tonelli. Deadline: 20 October 2023
Special Issue in
Computers
Edge and Fog Computing for Internet of Things Systems 2023
Guest Editors: Jorge Coelho, Luís Nogueira. Deadline: 31 October 2023
Special Issue in
Computers
Selected Papers from the 23rd International Conference on Computational Science and Its Applications (ICCSA 2023)
Guest Editors: Osvaldo Gervasi, Damiano Perri. Deadline: 15 November 2023
Special Issue in
Computers
Applied ML for Industrial IoT
Guest Editors: Muhammad Syafrudin, Ganjar Alfian, Norma Latif Fitriyani. Deadline: 1 December 2023