Journal Description
Computers is an international, scientific, peer-reviewed, open access journal of computer science, with computer and network architecture and computer–human interaction as its main foci, published monthly online by MDPI.
- Open Access: free for readers, with article processing charges (APCs) paid by authors or their institutions.
- High Visibility: indexed within Scopus, ESCI (Web of Science), dblp, Inspec, and other databases.
- Journal Rank: JCR - Q2 (Computer Science, Interdisciplinary Applications) / CiteScore - Q2 (Computer Networks and Communications)
- Rapid Publication: manuscripts are peer-reviewed and a first decision is provided to authors approximately 17.2 days after submission; acceptance to publication takes 3.9 days (median values for papers published in this journal in the first half of 2024).
- Recognition of Reviewers: reviewers who provide timely, thorough peer-review reports receive vouchers entitling them to a discount on the APC of their next publication in any MDPI journal, in appreciation of the work done.
Impact Factor: 2.6 (2023); 5-Year Impact Factor: 2.4 (2023)
Latest Articles
Optimized Machine Learning Classifiers for Symptom-Based Disease Screening
Computers 2024, 13(9), 233; https://doi.org/10.3390/computers13090233 - 14 Sep 2024
Abstract
This work presents a disease detection classifier based on symptoms encoded by their severity. The model is presented as part of a solution to the saturation of the healthcare system, aiding in the initial screening stage. An open-source dataset is used, which undergoes pre-processing and serves as the data source to train and test various machine learning models, including support vector machines (SVMs), random forests (RFs), k-nearest neighbors (KNN), and artificial neural networks (ANNs). A three-phase optimization process is developed to obtain the best classifier: first, the dataset is pre-processed; second, a grid search over several hyperparameter variations is performed for each classifier; and, finally, the best models obtained are subjected to additional filtering. The best-performing model, selected on the basis of performance and execution time, is a KNN with 2 neighbors, which achieves an accuracy and F1 score of over 98%. These results demonstrate the effectiveness and improvement of the evaluated models compared to previous studies, particularly in terms of accuracy. Although the ANN model has a longer execution time than KNN, it is retained in this work due to its potential to handle more complex datasets in a real clinical context.
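The winning classifier is simple enough to sketch from scratch. Below is a minimal k-nearest-neighbour vote over severity-encoded symptom vectors with k = 2, as in the selected model; the toy symptom encoding and disease labels are illustrative assumptions, not the paper's dataset.

```python
from collections import Counter
import math

def knn_predict(train, labels, query, k=2):
    """Classify `query` by majority vote among its k nearest neighbours,
    using Euclidean distance over severity-encoded symptom vectors."""
    neighbours = sorted((math.dist(x, query), y) for x, y in zip(train, labels))
    votes = Counter(y for _, y in neighbours[:k])
    return votes.most_common(1)[0][0]

# Toy severity encoding: one column per symptom, 0 = absent, 1-5 = severity.
train = [[3, 0, 1], [4, 0, 2], [0, 5, 0], [0, 4, 1]]
labels = ["flu", "flu", "migraine", "migraine"]
print(knn_predict(train, labels, [3, 1, 1]))  # the two closest rows are "flu"
```

In a real screening pipeline the same call sits behind the grid search described above, with k among the swept hyperparameters.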
Full article
(This article belongs to the Special Issue Future Systems Based on Healthcare 5.0 for Pandemic Preparedness 2024)
Open Access Article
Leveraging Large Language Models with Chain-of-Thought and Prompt Engineering for Traffic Crash Severity Analysis and Inference
by Hao Zhen, Yucheng Shi, Yongcan Huang, Jidong J. Yang and Ninghao Liu
Computers 2024, 13(9), 232; https://doi.org/10.3390/computers13090232 - 14 Sep 2024
Abstract
Harnessing the power of Large Language Models (LLMs), this study explores the use of three state-of-the-art LLMs, specifically GPT-3.5-turbo, LLaMA3-8B, and LLaMA3-70B, for crash severity analysis and inference, framing it as a classification task. We generate textual narratives from the original tabular traffic crash data using a pre-built template infused with domain knowledge. Additionally, we incorporate Chain-of-Thought (CoT) reasoning to guide the LLMs in analyzing the crash causes and then inferring the severity. This study also examines the impact of prompt engineering specifically designed for crash severity inference. The LLMs were tasked with crash severity inference to: (1) evaluate the models’ capabilities in crash severity analysis, (2) assess the effectiveness of CoT and domain-informed prompt engineering, and (3) examine the reasoning abilities within the CoT framework. Our results showed that LLaMA3-70B consistently outperformed the other models, particularly in zero-shot settings. The CoT and prompt engineering techniques significantly enhanced performance, improving logical reasoning and addressing alignment issues. Notably, the CoT offers valuable insights into LLMs’ reasoning processes, unleashing their capacity to consider diverse factors such as environmental conditions, driver behavior, and vehicle characteristics in severity analysis and inference.
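The narrative-plus-CoT prompting step can be illustrated with plain string templating. The field names, template wording, and severity labels below are assumptions for illustration; the paper's actual template is not reproduced here.

```python
def crash_narrative(record: dict) -> str:
    """Turn one row of tabular crash data into a narrative prompt, then
    append a chain-of-thought instruction (all field names illustrative)."""
    narrative = (
        f"A crash occurred on a {record['road_type']} in {record['weather']} "
        f"weather. The driver was {record['driver_age']} years old and "
        f"{record['restraint']} a seat belt."
    )
    cot = (
        "Let's think step by step: first analyse the likely causes of the "
        "crash, then infer its severity as one of [fatal, injury, PDO]."
    )
    return f"{narrative}\n{cot}"

print(crash_narrative({
    "road_type": "two-lane rural road", "weather": "rainy",
    "driver_age": 34, "restraint": "was not wearing",
}))
```

The resulting string would be sent to the LLM as the user message; zero-shot versus few-shot variants differ only in whether solved examples are prepended.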
Full article
(This article belongs to the Special Issue Natural Language Processing (NLP) and Large Language Modelling)
Open Access Article
Assessing the Impact of Prolonged Sitting and Poor Posture on Lower Back Pain: A Photogrammetric and Machine Learning Approach
by Valentina Markova, Miroslav Markov, Zornica Petrova and Silviya Filkova
Computers 2024, 13(9), 231; https://doi.org/10.3390/computers13090231 - 14 Sep 2024
Abstract
Prolonged static sitting at the workplace is considered one of the main risks for the development of musculoskeletal disorders (MSDs) and adverse health effects. Factors such as poor posture and extended sitting are perceived to be a cause of conditions such as lumbar discomfort and lower back pain (LBP), even though the scientific explanation of this relationship is still unclear and remains disputed in the scientific community. The current study evaluates the relationship between LBP and prolonged sitting in poor posture using photogrammetric images, postural angle calculation, machine learning models, and questionnaire-based self-reports on the occurrence of LBP and similar symptoms among the participants. Machine learning models trained with these data are employed to recognize poor body postures. Two scenarios were elaborated for modeling purposes: scenario 1, based on natural body postures tagged as correct or incorrect, and scenario 2, based on incorrect body postures additionally corrected by a rehabilitator. The achieved accuracies of 75.3% and 85%, respectively, for the two scenarios reveal the potential for future research in enhancing awareness and actively managing posture-related issues that elevate the likelihood of developing lower back pain symptoms.
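A postural angle of the kind computed from photogrammetric landmarks can be sketched with basic trigonometry. The ear-shoulder-hip landmark choice and the coordinates below are illustrative assumptions, not the study's measurement protocol.

```python
import math

def posture_angle(a, b, c):
    """Angle ABC in degrees at vertex b, from three 2-D landmarks,
    e.g. ear-shoulder-hip when screening for a slumped sitting posture."""
    ang = math.degrees(
        math.atan2(c[1] - b[1], c[0] - b[0])
        - math.atan2(a[1] - b[1], a[0] - b[0])
    )
    ang = abs(ang)
    return ang if ang <= 180 else 360 - ang  # report the interior angle

# An upright sitter: ear almost vertically above the shoulder-hip line,
# so the angle approaches 180 degrees; slumping shrinks it.
print(posture_angle((0.02, 1.0), (0.0, 0.5), (0.0, 0.0)))
```

Thresholding such angles per frame yields the correct/incorrect posture labels that the classifiers are then trained on.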
Full article
(This article belongs to the Special Issue Applications of Machine Learning and Artificial Intelligence for Healthcare)
Open Access Article
Transfer of Periodic Phenomena in Multiphase Capillary Flows to a Quasi-Stationary Observation Using U-Net
by Bastian Oldach, Philipp Wintermeyer and Norbert Kockmann
Computers 2024, 13(9), 230; https://doi.org/10.3390/computers13090230 - 13 Sep 2024
Abstract
Miniaturization promotes efficiency and widens the exploration domain in scientific fields such as computer science, engineering, medicine, and biotechnology. In particular, microfluidics is a flourishing technology concerned with the manipulation of small volumes of liquid. Dispersed droplets or bubbles in a second, immiscible liquid are of great interest for screening applications and for chemical and biochemical reactions. However, since very small dimensions are characterized by phenomena that differ from those at macroscopic scales, a deep understanding of the physics is crucial for effective device design. Due to the small volumes of miniaturized systems, common measurement techniques are not applicable, as they far exceed the dimensions of the device. Hence, image analysis is commonly chosen as a method to understand the ongoing phenomena. Artificial intelligence is now the state of the art for recognizing patterns in images and for analyzing datasets that are too large for humans to handle. X-ray-based computed tomography (CT) adds a third dimension to images, which yields more information but ultimately also more complex image analysis. In this work, we present the application of the U-Net neural network to extract certain states during droplet formation in a capillary, a constantly repeated process captured on tens of thousands of CT images. The experimental setup features a co-flow arrangement based on 3D-printed capillaries with two different cross-sections, each with an inner diameter or edge length, respectively, of 1.6 mm. For droplet formation, water was dispersed in silicone oil. The classification into different droplet states allows for 3D reconstruction and a time-resolved 3D analysis of the observed phenomena. The original U-Net was modified to process input images of 688 × 432 pixels, while the encoder and decoder paths together feature 23 convolutional layers. The U-Net comprises four max pooling layers and four upsampling layers. Training was performed on 90% and validation on 10% of a dataset containing 492 images showing different states of droplet formation. A mean Intersection over Union of 0.732 was achieved after training for 50 epochs, which is considered good performance. The presented U-Net needs 120 ms per image to process 60,000 images, categorizing emerging droplets into 24 states at 905 angles. Once the model is sufficiently trained, it provides accurate segmentation for various flow conditions. The selected images are used for 3D reconstruction, enabling the 2D and 3D quantification of emerging droplets in capillaries with circular and square cross-sections. By applying this method, a temporal resolution of 25–40 ms was achieved. Under the same flow conditions, droplets emerging in capillaries with a square cross-section become bigger than those in capillaries with a circular cross-section. The presented methodology is promising for other periodic phenomena in different scientific disciplines that rely on imaging techniques.
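The quality metric reported above, mean Intersection over Union, is easy to state concretely. A minimal sketch over flat label masks follows; the tiny three-class masks are toys, not the paper's CT segmentations.

```python
def mean_iou(pred, truth, classes):
    """Mean Intersection over Union between two flat lists of class labels
    (a 2-D mask can be flattened row by row before the call)."""
    ious = []
    for c in classes:
        p = {i for i, v in enumerate(pred) if v == c}
        t = {i for i, v in enumerate(truth) if v == c}
        union = p | t
        if union:                      # skip classes absent from both masks
            ious.append(len(p & t) / len(union))
    return sum(ious) / len(ious)

pred  = [0, 0, 1, 1, 2, 2]
truth = [0, 1, 1, 1, 2, 0]
print(round(mean_iou(pred, truth, classes=[0, 1, 2]), 3))  # IoUs 1/3, 2/3, 1/2
```

Averaging this score over a held-out set is what yields a single figure such as the 0.732 reported here.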
Full article
Open Access Article
Deep Learning for Predicting Attrition Rate in Open and Distance Learning (ODL) Institutions
by Juliana Ngozi Ndunagu, David Opeoluwa Oyewola, Farida Shehu Garki, Jude Chukwuma Onyeakazi, Christiana Uchenna Ezeanya and Elochukwu Ukwandu
Computers 2024, 13(9), 229; https://doi.org/10.3390/computers13090229 - 11 Sep 2024
Abstract
Student enrollment is a vital aspect of educational institutions, encompassing active, registered, and graduate students. All the same, some students fail to engage with their studies after admission and drop out along the way; this is known as attrition. The student attrition rate is acknowledged as one of the most complicated and significant problems facing educational systems and is caused by institutional and non-institutional challenges. In this study, the researchers utilized a dataset obtained from the National Open University of Nigeria (NOUN) covering 2012 to 2022, which included comprehensive information about students enrolled in various programs at the university who were inactive or had dropped out. The researchers used deep learning techniques, such as the Long Short-Term Memory (LSTM) model, and compared their performance with the One-Dimensional Convolutional Neural Network (1DCNN) model. The results revealed that the LSTM model achieved an overall accuracy of 57.29% on the training data, while the 1DCNN model exhibited a lower accuracy of 49.91%. The LSTM model thus showed a superior correct classification rate compared to the 1DCNN model.
Full article
(This article belongs to the Special Issue Harnessing Artificial Intelligence for Social and Semantic Understanding)
Open Access Review
The State of the Art of Digital Twins in Health—A Quick Review of the Literature
by Leonardo El-Warrak and Claudio M. de Farias
Computers 2024, 13(9), 228; https://doi.org/10.3390/computers13090228 - 11 Sep 2024
Abstract
A digital twin can be understood as a representation of a real asset; in other words, a virtual replica of a physical object, process, or even a system. Virtual models can integrate with the latest technologies, such as the Internet of Things (IoT), cloud computing, and artificial intelligence (AI). Digital twins have applications in a wide range of sectors, from manufacturing and engineering to healthcare. They have been used in managing healthcare facilities, streamlining care processes, personalizing treatments, and enhancing patient recovery. By analyzing data from sensors and other sources, healthcare professionals can develop virtual models of patients, organs, and human systems, experimenting with various strategies to identify the most effective approach. This can lead to more targeted and efficient therapies while reducing the risk of collateral effects. Digital twin technology can also be used to generate a virtual replica of a hospital to review operational strategies, capabilities, personnel, and care models in order to identify areas for improvement, predict future challenges, and optimize organizational strategies. The potential impact of this tool on our society and its well-being is quite significant. This article explores how digital twins are being used in healthcare, discusses the impact of this use, and offers projections for future research and technology development in the healthcare sector.
Full article
(This article belongs to the Topic eHealth and mHealth: Challenges and Prospects, 2nd Volume)
Open Access Article
An Unsupervised Approach for Treatment Effectiveness Monitoring Using Curvature Learning
by Hersh Sagreiya, Isabelle Durot and Alireza Akhbardeh
Computers 2024, 13(9), 227; https://doi.org/10.3390/computers13090227 - 9 Sep 2024
Abstract
Contrast-enhanced ultrasound could assess whether cancer chemotherapeutic agents work in days, rather than waiting 2–3 months, as is typical using the Response Evaluation Criteria in Solid Tumors (RECIST), thereby avoiding toxic side effects and expensive, ineffective therapy. A total of 40 mice were implanted with human colon cancer cells: treatment-sensitive mice in control (n = 10, receiving saline) and treated (n = 10, receiving bevacizumab) groups and treatment-resistant mice in control (n = 10) and treated (n = 10) groups. Each mouse was imaged using 3D dynamic contrast-enhanced ultrasound with Definity microbubbles. Curvature learning, an unsupervised learning approach, quantized pixels into three classes—blue, yellow, and red—representing normal, intermediate, and high cancer probability, both at baseline and after treatment. Next, a curvature learning score was calculated for each mouse using statistical measures representing variations in these three color classes across each frame from cine ultrasound images obtained during contrast administration on a given day (intra-day variability) and between pre- and post-treatment days (inter-day variability). A Wilcoxon rank-sum test compared score distributions between treated, treatment-sensitive mice and all others. There was a statistically significant difference in tumor score between the treated, treatment-sensitive group (n = 10) and all others (n = 30) (p = 0.0051). Curvature learning successfully identified treatment response, detecting changes in tumor perfusion before changes in tumor size. A similar technique could be developed for humans.
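The intra-day part of such a score, variation of the three colour-class proportions across cine frames, can be sketched generically. The statistic below (summed per-class standard deviation of pixel shares) is an illustrative stand-in, not the paper's exact curvature learning score.

```python
from statistics import pstdev

def variability_score(frames):
    """Sum, over the three colour classes, of the standard deviation of that
    class's pixel share across frames; larger values indicate larger
    perfusion changes. `frames` holds one (blue, yellow, red) pixel-count
    tuple per cine frame."""
    shares = [[c / sum(f) for c in f] for f in frames]
    return sum(pstdev(cls) for cls in zip(*shares))

stable   = [(80, 15, 5), (79, 16, 5), (81, 14, 5)]   # little perfusion change
changing = [(80, 15, 5), (60, 25, 15), (40, 30, 30)] # response-like shift
print(variability_score(stable) < variability_score(changing))  # True
```

The inter-day component would apply the same idea to the pre- versus post-treatment class proportions rather than to frames within one acquisition.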
Full article
(This article belongs to the Special Issue Machine and Deep Learning in the Health Domain 2024)
Open Access Article
An Integrated Software-Defined Networking–Network Function Virtualization Architecture for 5G RAN–Multi-Access Edge Computing Slice Management in the Internet of Industrial Things
by Francesco Chiti, Simone Morosi and Claudio Bartoli
Computers 2024, 13(9), 226; https://doi.org/10.3390/computers13090226 - 9 Sep 2024
Abstract
The Internet of Things (IoT), namely, the set of intelligent devices equipped with sensors and actuators and capable of connecting to the Internet, has now become an integral part of the most competitive industries, as it enables the optimization of production processes and the reduction of operating costs and maintenance time, together with improving the quality of products and services. More specifically, the term Industrial Internet of Things (IIoT) identifies a system consisting of advanced Internet-connected equipment and analytics platforms specialized for industrial activities, where IIoT devices range from small environmental sensors to complex industrial robots. This paper presents an integrated high-level SDN-NFV architecture enabling clusters of smart devices to interconnect and manage the exchange of data with distributed control processes and databases. In particular, it focuses on 5G RAN-MEC slice management in the IIoT context. The proposed system is emulated by means of two distinct real-time frameworks, demonstrating improvements in connectivity, energy efficiency, end-to-end latency, and throughput. In addition, its scalability, modularity, and flexibility are assessed, making the framework suitable for testing further, more advanced applications.
Full article
(This article belongs to the Special Issue Emerging Trends and Challenges of Software-Defined Networking (SDN) Technologies)
Open Access Article
SLACPSS: Secure Lightweight Authentication for Cyber–Physical–Social Systems
by Ahmed Zedaan M. Abed, Tamer Abdelkader and Mohamed Hashem
Computers 2024, 13(9), 225; https://doi.org/10.3390/computers13090225 - 9 Sep 2024
Abstract
The concept of Cyber–Physical–Social Systems (CPSSs) has emerged in response to the need to understand the interaction between Cyber–Physical Systems (CPSs) and humans. This shift from CPSs to CPSSs is primarily due to the widespread use of sensor-equipped smart devices that are closely connected to users. CPSSs have been a topic of interest for more than ten years, gaining increasing attention in recent years. The inclusion of human elements in CPS research has presented new challenges, particularly in understanding human dynamics, which adds complexity that has yet to be fully explored. CPSSs form a base class consisting of three basic components: cyberspace, physical space, and social space. We map the components of the metaverse onto those of a CPSS and show that the metaverse is an implementation of a Cyber–Physical–Social System. The metaverse is made up of computer systems with many elements, such as artificial intelligence, computer vision, image processing, mixed reality, augmented reality, and extended reality. It also comprises physical systems, controlled objects, and human interaction. The identification process in CPSSs suffers from weak security, and the authentication problem requires heavy computation. Therefore, we propose a new protocol for Secure Lightweight Authentication in Cyber–Physical–Social Systems (SLACPSS) to offer secure communication between platform servers and users as well as secure interactions between avatars. We perform a security analysis and compare the proposed protocol to related previous ones. The analysis shows that the proposed protocol is lightweight and secure.
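The SLACPSS protocol itself is not reproduced in this abstract, but the general shape of the lightweight challenge-response authentication it targets can be sketched with standard primitives. HMAC-SHA256 over a fresh server nonce is an assumption for illustration here, not the authors' construction.

```python
import hashlib
import hmac
import secrets

def respond(shared_key: bytes, challenge: bytes) -> bytes:
    """Prover's (e.g. avatar's) answer to a server nonce:
    HMAC-SHA256(key, challenge)."""
    return hmac.new(shared_key, challenge, hashlib.sha256).digest()

def verify(shared_key: bytes, challenge: bytes, response: bytes) -> bool:
    """Server-side check, done in constant time to resist timing attacks."""
    return hmac.compare_digest(respond(shared_key, challenge), response)

key = secrets.token_bytes(32)    # pre-shared between avatar and platform server
nonce = secrets.token_bytes(16)  # fresh per session, which prevents replay
print(verify(key, nonce, respond(key, nonce)))  # True
```

A single keyed hash per authentication is what makes such schemes attractive on resource-constrained CPSS devices compared with heavyweight public-key handshakes.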
Full article
(This article belongs to the Topic Simulations and Applications of Augmented and Virtual Reality, 2nd Edition)
Open Access Article
Application of Proximal Policy Optimization for Resource Orchestration in Serverless Edge Computing
by Mauro Femminella and Gianluca Reali
Computers 2024, 13(9), 224; https://doi.org/10.3390/computers13090224 - 6 Sep 2024
Abstract
Serverless computing is a new cloud computing model suitable for providing services in both large cloud and edge clusters. In edge clusters, the autoscaling functions play a key role on serverless platforms as the dynamic scaling of function instances can lead to reduced latency and efficient resource usage, both typical requirements of edge-hosted services. However, a badly configured scaling function can introduce unexpected latency due to so-called “cold start” events or service request losses. In this work, we focus on the optimization of resource-based autoscaling on OpenFaaS, the most-adopted open-source Kubernetes-based serverless platform, leveraging real-world serverless traffic traces. We resort to the reinforcement learning algorithm named Proximal Policy Optimization to dynamically configure the value of the Kubernetes Horizontal Pod Autoscaler, trained on real traffic. This was accomplished via a state space model able to take into account resource consumption, performance values, and time of day. In addition, the reward function definition promotes Service-Level Agreement (SLA) compliance. We evaluate the proposed agent, comparing its performance in terms of average latency, CPU usage, memory usage, and loss percentage with respect to the baseline system. The experimental results show the benefits provided by the proposed agent, obtaining a service time within the SLA while limiting resource consumption and service loss.
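An SLA-aware reward of the kind described above, which promotes SLA compliance while limiting resource consumption and request losses, can be sketched as follows. The weights and functional form are illustrative assumptions, not the paper's definition.

```python
def reward(latency_ms, sla_ms, cpu_util, loss_rate):
    """Toy SLA-aware RL reward: +1 when latency meets the SLA, discounted by
    CPU use and request losses; an SLA violation is penalised in proportion
    to how far latency overshoots the target."""
    if latency_ms <= sla_ms:
        return 1.0 - 0.3 * cpu_util - 2.0 * loss_rate
    return -(latency_ms - sla_ms) / sla_ms

print(reward(80, 100, cpu_util=0.5, loss_rate=0.0))   # within SLA: 0.85
print(reward(150, 100, cpu_util=0.2, loss_rate=0.0))  # 50% over SLA: -0.5
```

In a PPO setup this scalar would be returned after each action (here, picking a new Horizontal Pod Autoscaler target), with the state vector carrying the resource, performance, and time-of-day features the abstract mentions.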
Full article
(This article belongs to the Special Issue Advances in High-Performance Switching and Routing)
Open Access Review
A Survey of Blockchain Applicability, Challenges, and Key Threats
by Catalin Daniel Morar and Daniela Elena Popescu
Computers 2024, 13(9), 223; https://doi.org/10.3390/computers13090223 - 6 Sep 2024
Abstract
With its decentralized, immutable, and consensus-based validation features, blockchain technology has grown from early financial applications into a variety of other sectors. This paper outlines various applications of the blockchain and systematically identifies general challenges and key threats regarding its adoption. The challenges are organized into broader groups to allow a clear overview and the identification of interconnected issues. Potential solutions are introduced into the discussion, addressing how they might mitigate these challenges and their forward-looking effects in fostering the adoption of blockchain technology. The paper also highlights some potential directions for future research that may overcome these challenges and unlock further applications. More generally, the article describes the potentially transformational implications of blockchain technology through the ways in which it may contribute to the advancement of a diversity of industries.
Full article
(This article belongs to the Special Issue Next Generation Blockchain, Information Security and Soft Computing for Future IoT Networks)
Open Access Article
Usability Heuristics for Metaverse
by Khalil Omar, Hussam Fakhouri, Jamal Zraqou and Jorge Marx Gómez
Computers 2024, 13(9), 222; https://doi.org/10.3390/computers13090222 - 6 Sep 2024
Abstract
The inclusion of usability heuristics in the metaverse aims to solve the unique issues raised by virtual reality (VR), augmented reality (AR), and mixed reality (MR) environments. This research points out the usability challenges of metaverse user interfaces (UIs), such as information overload, complex navigation, and the need for intuitive control mechanisms in these immersive spaces. By adapting existing usability models to the metaverse context, this study presents a detailed list of heuristics and sub-heuristics designed to improve the overall usability of metaverse UIs. These heuristics are essential for creating user-friendly, inclusive, and captivating virtual environments (VEs) that address the needs of three-dimensional interaction, social dynamics, and integration between the digital and physical worlds. These heuristics must keep pace with new technological advancements, as well as changing user expectations, to ensure a positive user experience (UX) within the metaverse.
Full article
(This article belongs to the Special Issue Gamification and Serious Games Applications in Immersive Learning Environments)
Open Access Article
Teach Programming Using Task-Driven Case Studies: Pedagogical Approach, Guidelines, and Implementation
by Jaroslav Porubän, Milan Nosál’, Matúš Sulír and Sergej Chodarev
Computers 2024, 13(9), 221; https://doi.org/10.3390/computers13090221 - 5 Sep 2024
Abstract
Despite the effort invested to improve the teaching of programming, students often face problems with understanding its principles when using traditional learning approaches. This paper presents a novel teaching method for programming, combining the task-driven methodology and the case study approach. This method is called a task-driven case study. The case study aspect should provide a real-world context for the examples used to explain the required knowledge. The tasks guide students during the course to ensure that they will not fall into bad practices. We provide reasoning for using the combination of these two methodologies and define the essential properties of our method. Using a specific example of the Minesweeper case study from the Java technologies course, the readers are guided through the process of the case study selection, solution implementation, study guide writing, and course execution. The teachers’ and students’ experiences with this approach, including its advantages and potential drawbacks, are also summarized.
Full article
(This article belongs to the Special Issue Future Trends in Computer Programming Education)
Open Access Article
Digital Genome and Self-Regulating Distributed Software Applications with Associative Memory and Event-Driven History
by Rao Mikkilineni, W. Patrick Kelly and Gideon Crawley
Computers 2024, 13(9), 220; https://doi.org/10.3390/computers13090220 - 5 Sep 2024
Abstract
Biological systems have a unique ability inherited through their genome. It allows them to build, operate, and manage a society of cells with complex organizational structures, where autonomous components execute specific tasks and collaborate in groups to fulfill systemic goals with shared knowledge. The system receives information from various senses, makes sense of what is being observed, and acts using its experience while the observations are still in progress. We use the General Theory of Information (GTI) to implement a digital genome, specifying the operational processes that design, deploy, operate, and manage a cloud-agnostic distributed application that is independent of IaaS and PaaS infrastructure, which provides the resources required to execute the software components. The digital genome specifies the functional and non-functional requirements that define the goals and best-practice policies to evolve the system using associative memory and event-driven interaction history to maintain stability and safety while achieving the system’s objectives. We demonstrate a structural machine, cognizing oracles, and knowledge structures derived from GTI used for designing, deploying, operating, and managing a distributed video streaming application with autopoietic self-regulation that maintains structural stability and communication among distributed components with shared knowledge while maintaining expected behaviors dictated by functional requirements.
Full article
(This article belongs to the Special Issue Software Engineering Methodologies and Languages for Event Driven and Large-Scale Management Systems (SLEMS))
Open Access Review
Predicting Student Performance in Introductory Programming Courses
by João P. J. Pires, Fernanda Brito Correia, Anabela Gomes, Ana Rosa Borges and Jorge Bernardino
Computers 2024, 13(9), 219; https://doi.org/10.3390/computers13090219 - 5 Sep 2024
Abstract
The importance of accurately predicting student performance in education, especially in the challenging curricular unit of Introductory Programming, cannot be overstated. As institutions struggle with high failure rates and look for solutions to improve the learning experience, the need for effective prediction methods becomes critical. This study aims to conduct a systematic review of the literature on methods for predicting student performance in higher education, specifically in Introductory Programming, focusing on machine learning algorithms. Through this study, we not only present different applicable algorithms but also evaluate their performance, using identified metrics and considering the applicability in the educational context, specifically in higher education and in Introductory Programming. The results obtained through this study allowed us to identify trends in the literature, such as which machine learning algorithms were most applied in the context of predicting students’ performance in Introductory Programming in higher education, as well as which evaluation metrics and datasets are usually used.
Full article
(This article belongs to the Special Issue Future Trends in Computer Programming Education)
Open Access Article
Detection of Bus Driver Mobile Phone Usage Using Kolmogorov-Arnold Networks
by
János Hollósi, Áron Ballagi, Gábor Kovács, Szabolcs Fischer and Viktor Nagy
Computers 2024, 13(9), 218; https://doi.org/10.3390/computers13090218 - 3 Sep 2024
Abstract
This research introduces a new approach for detecting mobile phone use by drivers, exploiting the capabilities of Kolmogorov-Arnold Networks (KAN) to improve road safety and comply with regulations prohibiting phone use while driving. To address the lack of available data for this specific task, a unique dataset was constructed consisting of images of bus drivers in two scenarios: driving without phone interaction and driving while on a phone call. This dataset provides the basis for the current research. Different KAN-based networks were developed for custom action recognition tailored to the nuanced task of identifying drivers holding phones. The system’s performance was evaluated against convolutional neural network-based solutions, and differences in accuracy and robustness were observed. The aim was to propose an appropriate solution for professional Driver Monitoring Systems (DMS) in research and development and to investigate the efficiency of KAN solutions for this specific sub-task. The implications of this work extend beyond enforcement, providing a foundational technology for automating monitoring and improving safety protocols in the commercial and public transport sectors. In conclusion, this study demonstrates the efficacy of KAN network layers in neural network designs for driver monitoring applications.
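For readers unfamiliar with KANs, the underlying Kolmogorov-Arnold idea is that every network edge carries its own learnable univariate function, and each node simply sums the outputs of its incoming edges. The toy layer below is an illustrative sketch only, assuming cubic-polynomial edge functions (published KAN models use spline bases, and the `kan_layer` helper is not the paper's implementation):

```python
import numpy as np

def kan_layer(x, coeffs):
    """Toy Kolmogorov-Arnold style layer: each edge (p -> q) applies its
    own learnable univariate function, here a cubic polynomial
    phi_{q,p}(t) = c0 + c1*t + c2*t^2 + c3*t^3, and each output unit
    sums its incoming edge activations.

    x:      input vector, shape (n_in,)
    coeffs: polynomial coefficients, shape (n_out, n_in, 4)
    """
    powers = np.stack([x**0, x, x**2, x**3], axis=-1)   # (n_in, 4)
    # activations[q, p] = phi_{q,p}(x_p)
    activations = (coeffs * powers[None, :, :]).sum(axis=-1)
    return activations.sum(axis=1)                      # (n_out,)
```

With identity edge functions (coefficients `[0, 1, 0, 0]`), the layer reduces to summing its inputs, which makes the edge-function interpretation easy to check by hand.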
Full article
(This article belongs to the Special Issue Machine Learning Applications in Pattern Recognition)
Open Access Article
Introducing HeliEns: A Novel Hybrid Ensemble Learning Algorithm for Early Diagnosis of Helicobacter pylori Infection
by
Sultan Noman Qasem
Computers 2024, 13(9), 217; https://doi.org/10.3390/computers13090217 - 2 Sep 2024
Abstract
The Gram-negative bacterium Helicobacter pylori (H. pylori) infects the human stomach and is a major cause of gastritis, peptic ulcers, and gastric cancer. With over 50% of the global population affected, early and accurate diagnosis of H. pylori infection is crucial for effective treatment and prevention of severe complications. Traditional diagnostic methods, such as endoscopy with biopsy, serology, urea breath tests, and stool antigen tests, are often invasive, costly, and can lack precision. Recent advancements in machine learning (ML) and quantum machine learning (QML) offer promising non-invasive alternatives capable of analyzing complex datasets to identify patterns not easily discernible by human analysis. This research aims to develop and evaluate HeliEns, a novel quantum hybrid ensemble learning algorithm designed for the early and accurate diagnosis of H. pylori infection. HeliEns combines the strengths of multiple quantum machine learning models, specifically Quantum K-Nearest Neighbors (QKNN), Quantum Naive Bayes (QNB), and Quantum Logistic Regression (QLR), to enhance diagnostic accuracy and reliability. The development of HeliEns involved rigorous data preprocessing steps, including data cleaning, encoding of categorical variables, and feature scaling, to ensure the dataset's suitability for quantum machine learning algorithms. Individual models (QKNN, QNB, and QLR) were trained and evaluated using metrics such as accuracy, precision, recall, and F1-score. The ensemble model was then constructed by integrating these quantum models using a hybrid approach that leverages their diverse strengths. The HeliEns model demonstrated superior performance compared to individual models, achieving an accuracy of 94%, precision of 97%, recall of 92%, and an F1-score of 94% in detecting H. pylori infection. The quantum ensemble approach effectively mitigated the limitations of individual models, providing a robust and reliable diagnostic tool.
HeliEns significantly improved diagnostic accuracy and reliability for early H. pylori detection. The integration of multiple quantum ML algorithms within the HeliEns framework enhanced overall model performance. The non-invasive nature of the HeliEns model offers a cost-effective and user-friendly alternative to traditional diagnostic methods. This research underscores the transformative potential of quantum machine learning in healthcare, particularly in enhancing diagnostic efficiency and patient outcomes. HeliEns represents a significant advancement in the early diagnosis of H. pylori infection, leveraging quantum machine learning to provide a non-invasive, accurate, and reliable diagnostic tool. This research highlights the importance of QML-driven solutions in healthcare and sets the stage for future research to further refine and validate the HeliEns model in real-world clinical settings.
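A classical sketch of the simplest ensemble fusion rule, hard majority voting, conveys how combining base classifiers can mask individual weaknesses. The paper's hybrid quantum integration is more elaborate; the `majority_vote` helper below is a hypothetical illustration, not code from the study:

```python
from collections import Counter

def majority_vote(predictions):
    """Hard-voting ensemble: each base model contributes one label per
    sample; the most common label wins (ties broken by the label that
    is encountered first). `predictions` is a list of per-model label
    lists, all of equal length."""
    n_samples = len(predictions[0])
    combined = []
    for i in range(n_samples):
        votes = [model[i] for model in predictions]
        combined.append(Counter(votes).most_common(1)[0][0])
    return combined
```

Even when each base model errs on a different sample, the vote can recover the correct label, which is the intuition behind ensembles mitigating the limitations of individual models.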
Full article
Open Access Article
Research on Identification of Critical Quality Features of Machining Processes Based on Complex Networks and Entropy-CRITIC Methods
by
Dongyue Qu, Wenchao Liang, Yuting Zhang, Chaoyun Gu, Guangyu Zhou and Yong Zhan
Computers 2024, 13(9), 216; https://doi.org/10.3390/computers13090216 - 30 Aug 2024
Abstract
Aiming at the difficulty in effectively identifying critical quality features in the complex machining process, this paper proposes a critical quality feature recognition method based on a machining process network. Firstly, the machining process network model is constructed based on the complex network theory. The LeaderRank algorithm is used to identify the critical processes in the machining process. Secondly, the Entropy-CRITIC method is used to calculate the weight of the quality features of the critical processes, and the critical quality features of the critical processes are determined according to weight ranking results. Finally, the feasibility and effectiveness of the method are verified by taking the medium-speed marine diesel engine coupling rod machining as an example. The results show that the method can still effectively identify the critical quality features in the case of small sample data and provide support for machining process optimization and quality control, thus improving product consistency, reliability, and machining efficiency.
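As background, the entropy weight method scores a feature by how much its values diverge across samples, while CRITIC combines a feature's contrast (its standard deviation) with its conflict with other features (one minus the pairwise correlations). A minimal sketch follows, assuming positive, non-constant feature columns and a simple convex combination of the two weight vectors; the paper's exact fusion rule may differ:

```python
import numpy as np

def entropy_weights(X):
    """Entropy weights for a decision matrix X (samples x features).
    Assumes positive data so column proportions are well defined."""
    P = X / X.sum(axis=0)                       # column-wise proportions
    n = X.shape[0]
    e = -(P * np.log(P)).sum(axis=0) / np.log(n)  # normalized entropy
    d = 1.0 - e                                 # degree of divergence
    return d / d.sum()

def critic_weights(X):
    """CRITIC weights: contrast (std) times conflict (1 - correlation).
    Assumes no constant columns (min-max normalization would divide by 0)."""
    Z = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0))
    sigma = Z.std(axis=0)
    R = np.corrcoef(Z, rowvar=False)
    c = sigma * (1.0 - R).sum(axis=0)
    return c / c.sum()

def entropy_critic_weights(X, alpha=0.5):
    """Illustrative fusion: convex combination of the two weight vectors."""
    w = alpha * entropy_weights(X) + (1.0 - alpha) * critic_weights(X)
    return w / w.sum()
```

The resulting weights sum to one, and the highest-weighted quality features are the candidates for "critical" status after ranking.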
Full article
(This article belongs to the Topic Innovation, Communication and Engineering)
Open Access Article
A Study of a Drawing Exactness Assessment Method Using Localized Normalized Cross-Correlations in a Portrait Drawing Learning Assistant System
by
Yue Zhang, Zitong Kong, Nobuo Funabiki and Chen-Chien Hsu
Computers 2024, 13(9), 215; https://doi.org/10.3390/computers13090215 - 23 Aug 2024
Abstract
Nowadays, portrait drawing has gained significance in cultivating painting skills and human sentiments. In practice, novices often struggle with this art form without proper guidance from professionals, since they lack understanding of the proportions and structures of facial features. To solve this limitation, we have developed a Portrait Drawing Learning Assistant System (PDLAS) to assist novices in learning portrait drawing. The PDLAS provides auxiliary lines as references for facial features that are extracted by applying OpenPose and OpenCV libraries to a face photo image of the target. A learner can draw a portrait on an iPad using drawing software where the auxiliary lines appear on a different layer to the portrait. However, in the current implementation, the PDLAS does not offer a function to assess the exactness of the drawing result for feedback to the learner. In this paper, we present a drawing exactness assessment method using a Localized Normalized Cross-Correlation (NCC) algorithm in the PDLAS. NCC gives a similarity score between the original face photo and drawing result images by calculating the correlation of the brightness distributions. For precise feedback, the method calculates the NCC for each face component by extracting the bounding box. In addition, in this paper, we improve the auxiliary lines for the nose. For evaluations, we asked students at Okayama University, Japan, to draw portraits using the PDLAS, and applied the proposed method to their drawing results, where the application results validated the effectiveness by suggesting improvements in drawing components. The system usability was also confirmed through a questionnaire with a SUS score. The main finding of this research is that the implementation of the NCC algorithm within the PDLAS significantly enhances the accuracy of novice portrait drawings by providing detailed feedback on specific facial features, proving the system’s efficacy in art education and training.
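For reference, the NCC score described here is the correlation of mean-centred brightness values over a patch: it equals 1 for identical patches and -1 for brightness-inverted ones. A minimal sketch on grayscale arrays follows; the function name and patch handling are illustrative assumptions, not the PDLAS code:

```python
import numpy as np

def localized_ncc(patch_a, patch_b):
    """Normalized cross-correlation between two equally sized grayscale
    patches (e.g. the bounding box of one face component in the photo
    and in the drawing). Returns a similarity score in [-1, 1]."""
    a = patch_a.astype(float) - patch_a.mean()
    b = patch_b.astype(float) - patch_b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    if denom == 0:
        return 0.0  # a flat patch carries no correlation signal
    return float((a * b).sum() / denom)
```

Computing this score per face-component bounding box, rather than over the whole image, is what makes the feedback localized: a low score on the nose patch points the learner at the nose specifically.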
Full article
(This article belongs to the Special Issue Smart Learning Environments)
Open Access Article
Self-Adaptive Evolutionary Info Variational Autoencoder
by
Toby A. Emm and Yu Zhang
Computers 2024, 13(8), 214; https://doi.org/10.3390/computers13080214 - 22 Aug 2024
Abstract
With the advent of increasingly powerful machine learning algorithms and the ability to rapidly obtain accurate aerodynamic performance data, there has been a steady rise in the use of algorithms for automated aerodynamic design optimisation. However, long training times, high-dimensional design spaces and rapid geometry alteration pose barriers to this becoming an efficient and worthwhile process. The variational autoencoder (VAE) is a probabilistic generative model capable of learning a low-dimensional representation of high-dimensional input data. Despite their impressive power, VAEs suffer from several issues, resulting in poor model performance and limiting optimisation capability. Several approaches have been proposed in attempts to fix these issues. This study combines the approaches of loss function modification with evolutionary hyperparameter tuning, introducing a new self-adaptive evolutionary info variational autoencoder (SA-eInfoVAE). The proposed model is validated against previous models on the MNIST handwritten digits dataset, assessing the total model performance. The proposed model is then applied to an aircraft image dataset to assess the applicability and complications involved with complex datasets such as those used for aerodynamic design optimisation. The results obtained on the MNIST dataset show improved inference in conjunction with increased generative and reconstructive performance. This is validated through a thorough comparison against baseline models using the quantitative metrics of reconstruction error, loss function value, and disentanglement percentage. A number of qualitative image plots provide further comparison of the generative and reconstructive performance, as well as the strength of latent encodings. Furthermore, the results on the aircraft image dataset show the proposed model can produce high-quality reconstructions and latent encodings.
The analysis suggests, given a high-quality dataset and optimal network structure, the proposed model is capable of outperforming the current VAE models, reducing the training time cost and improving the quality of automated aerodynamic design optimisation.
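As background for the loss-function-modification theme, a VAE's training objective pairs a reconstruction term with a KL regularizer that pulls the learned latent Gaussian towards a standard normal; for a diagonal Gaussian the KL term has the closed form 0.5 * sum(sigma^2 + mu^2 - 1 - log sigma^2). The sketch below shows that standard term only, not the SA-eInfoVAE objective, which modifies this loss:

```python
import numpy as np

def gaussian_kl(mu, log_var):
    """KL( N(mu, diag(sigma^2)) || N(0, I) ) in closed form, the
    regularization term of the standard VAE loss (negative ELBO).
    `mu` and `log_var` are the encoder's outputs for one sample."""
    return 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var)
```

The term vanishes exactly when the encoder outputs a standard normal (mu = 0, log_var = 0) and grows as the latent code drifts away, which is the pressure that loss-modification approaches such as InfoVAE rebalance.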
Full article
(This article belongs to the Special Issue Generative Artificial Intelligence and Machine Learning in Industrial Processes and Manufacturing)
Topics
Topic in
Biomedicines, Computers, Information, IJERPH, JPM
eHealth and mHealth: Challenges and Prospects, 2nd Volume
Topic Editors: Antonis Billis, Manuel Dominguez-Morales, Anton Civit
Deadline: 30 September 2024
Topic in
Applied Sciences, Computers, Electronics, JSAN, Technologies
Emerging AI+X Technologies including Selected Papers from ICGHIT
Topic Editors: Byung-Seo Kim, Hyunsik Ahn, Kyu-Tae Lee
Deadline: 31 October 2024
Topic in
Computers, Informatics, Information, Logistics, Mathematics, Algorithms
Decision Science Applications and Models (DSAM)
Topic Editors: Daniel Riera Terrén, Angel A. Juan, Majsa Ammuriova, Laura Calvet
Deadline: 31 December 2024
Topic in
Energies, Applied Sciences, Mathematics, Entropy, Computers
Numerical Methods and Computer Simulations in Energy Analysis, 2nd Edition
Topic Editors: Marcin Kamiński, Mateus Mendes
Deadline: 20 January 2025
Special Issues
Special Issue in
Computers
Artificial Intelligence in Control
Guest Editors: Ivan Maximov, Mads Sloth Vinding, Christoph Aigner
Deadline: 30 September 2024
Special Issue in
Computers
Wireless Sensor Network, IoT and Cloud Computing Technologies for Smart Cities
Guest Editor: Lilatul Ferdouse
Deadline: 30 September 2024
Special Issue in
Computers
Uncertainty-Aware Artificial Intelligence
Guest Editors: Hussain Mohammed Dipu Kabir, Syed Bahauddin Alam, Subrota Kumar Mondal, Jeremy Straub
Deadline: 30 September 2024
Special Issue in
Computers
Software-Defined Internet of Everything
Guest Editors: Yanxiao Zhao, Guodong Wang
Deadline: 30 September 2024