Computers, Volume 13, Issue 4 (April 2024) – 25 articles

Cover Story: Transformers have emerged as a major deep-learning architecture, their adoption spreading to a wide audience thanks to popular user-friendly interfaces and their use extending from the original NLP domain to images and other forms of data. However, their user-friendliness has not translated into an equal degree of transparency: transformers retain the opacity typically associated with deep-learning architectures. Efforts are nevertheless underway to add explainability to transformers. This paper reviews the current status of those efforts and the trends observed in them. The major explainability techniques are described by adopting a taxonomy based on the component of the architecture that is exploited to explain the transformer’s results.
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the tables of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view the papers in PDF format, click on the "PDF Full-text" link and use the free Adobe Reader to open them.
30 pages, 8167 KiB  
Article
Performance Evaluation and Analysis of Urban-Suburban 5G Cellular Networks
by Aymen I. Zreikat and Shinu Mathew
Computers 2024, 13(4), 108; https://doi.org/10.3390/computers13040108 - 22 Apr 2024
Abstract
5G is the fifth-generation technology standard for the new generation of cellular networks. Combining 5G and millimeter waves (mmWave) offers tremendous capacity and even lower latency, allowing users to fully exploit the 5G experience. 5G is the successor to the fourth generation (4G), providing high-speed networks that support higher traffic capacity, higher throughput, and greater network efficiency, as well as massive applications, especially in the internet-of-things (IoT) and machine-to-machine areas. Therefore, performance evaluation and analysis of such systems is a critical research task. In this paper, a new model of an urban-suburban environment in a 5G network is introduced, formed of seven cells with a central urban cell (hot spot) surrounded by six suburban cells. With the proposed model, the end-user can have continuous connectivity under different propagation environments. Based on the suggested model, the related capacity bounds are derived and the performance of the 5G network is studied via simulation, considering different parameters that affect performance, such as the non-orthogonality factor, the load concentration in both urban and suburban areas, the height of the mobile, the height of the base station, the cell radius, and the distance between base stations. Blocking probability and bandwidth utilization are the two main performance measures studied; however, the effect of the above parameters on the system capacity is also examined. The numerical results, based on a network-level call admission control algorithm, reveal that the investigated parameters have a major influence on network performance. Therefore, the outcome of this research can be a very useful tool for mobile operators in 5G network planning. Full article
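As a rough illustration of the blocking-probability and utilization metrics discussed in this abstract, the short Python sketch below evaluates the classical Erlang-B loss formula for a single cell. It is a generic textbook model with hypothetical channel and load figures, not the authors' network-level call admission control algorithm.

```python
# Illustrative sketch (not the paper's model): Erlang-B blocking probability
# for a single cell offering "servers" channels to an offered load in Erlangs.
def erlang_b(erlangs: float, servers: int) -> float:
    """Erlang-B formula, computed via the standard stable recursion."""
    b = 1.0
    for n in range(1, servers + 1):
        b = (erlangs * b) / (n + erlangs * b)
    return b

if __name__ == "__main__":
    # Hypothetical urban hot-spot cell: 50 channels, 40 Erlangs of offered load.
    load, channels = 40.0, 50
    blocking = erlang_b(load, channels)
    utilization = load * (1.0 - blocking) / channels  # carried load per channel
    print(f"Blocking probability: {blocking:.4f}, bandwidth utilization: {utilization:.4f}")
```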

19 pages, 1329 KiB  
Article
Blockchain Integration and Its Impact on Renewable Energy
by Hamed Taherdoost
Computers 2024, 13(4), 107; https://doi.org/10.3390/computers13040107 - 22 Apr 2024
Abstract
This paper investigates the evolving landscape of blockchain technology in renewable energy. The study, based on a Scopus database search on 21 February 2024, reveals a growing trend in scholarly output, predominantly in engineering, energy, and computer science. The diverse range of source types and global contributions, led by China, reflects the interdisciplinary nature of this field. This comprehensive review delves into 33 research papers, examining the integration of blockchain in renewable energy systems, encompassing decentralized power dispatching, certificate trading, alternative energy selection, and management in applications like intelligent transportation systems and microgrids. The papers employ theoretical concepts such as decentralized power dispatching models and permissioned blockchains, utilizing methodologies involving advanced algorithms, consensus mechanisms, and smart contracts to enhance efficiency, security, and transparency. The findings suggest that blockchain integration can reduce costs, increase renewable source utilization, and optimize energy management. Despite these advantages, challenges including uncertainties, privacy concerns, scalability issues, and energy consumption are identified, alongside legal and regulatory compliance and market acceptance hurdles. Overcoming resistance to change and building trust in blockchain-based systems are crucial for successful adoption, emphasizing the need for collaborative efforts among industry stakeholders, regulators, and technology developers to unlock the full potential of blockchains in renewable energy integration. Full article

25 pages, 1263 KiB  
Article
Cognitive Classifier of Hand Gesture Images for Automated Sign Language Recognition: Soft Robot Assistance Based on Neutrosophic Markov Chain Paradigm
by Muslem Al-Saidi, Áron Ballagi, Oday Ali Hassen and Saad M. Saad
Computers 2024, 13(4), 106; https://doi.org/10.3390/computers13040106 - 22 Apr 2024
Abstract
In recent years, Sign Language Recognition (SLR) has become an increasingly discussed topic in the human–computer interface (HCI) field. The most significant difficulty confronting SLR is finding algorithms that scale effectively with a growing vocabulary size and a limited supply of training data for signer-independent applications. Due to its sensitivity to shape information, automated SLR based on hidden Markov models (HMMs) cannot characterize the confusing distributions of the observations in gesture features with sufficiently precise parameters. In order to model uncertainty in hypothesis spaces, many scholars extend HMMs with higher-order fuzzy sets to generate interval-type-2 fuzzy HMMs. This expansion is helpful because it brings the uncertainty and fuzziness of conventional HMM mapping under control. Neutrosophic sets are used in this work to deal with indeterminacy in a practical SLR setting: existing interval-type-2 fuzzy HMMs cannot consider uncertain information that includes indeterminacy, whereas the neutrosophic hidden Markov model successfully identifies the best route between states when there is vagueness. The three neutrosophic membership functions (truth, indeterminacy, and falsity grades) provide additional degrees of freedom for assessing the HMM’s uncertainty. This approach could be helpful for an extensive vocabulary and hence seeks to solve the scalability issue. In addition, it may function independently of the signer, without needing data gloves or any other input devices. The experimental results demonstrate that the neutrosophic HMM is nearly as computationally demanding as the fuzzy HMM, has similar performance, and is more robust to gesture variations. Full article
(This article belongs to the Special Issue Uncertainty-Aware Artificial Intelligence)
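For readers unfamiliar with HMM decoding, the minimal sketch below shows classical Viterbi decoding in Python for a toy gesture model. The paper's contribution replaces the crisp probabilities with neutrosophic truth, indeterminacy, and falsity grades; that extension is not reproduced here, and all numbers are hypothetical.

```python
import numpy as np

# Minimal sketch of classical Viterbi decoding for a gesture HMM.
# The paper's neutrosophic HMM replaces these crisp probabilities with
# truth/indeterminacy/falsity grades; that extension is not shown here.
def viterbi(obs, start_p, trans_p, emit_p):
    """obs: observation indices; start_p: (S,), trans_p: (S,S), emit_p: (S,O)."""
    log_delta = np.log(start_p) + np.log(emit_p[:, obs[0]])
    backptr = []
    for o in obs[1:]:
        scores = log_delta[:, None] + np.log(trans_p)   # (S,S): previous -> next state
        backptr.append(scores.argmax(axis=0))
        log_delta = scores.max(axis=0) + np.log(emit_p[:, o])
    # Backtrack the most likely state sequence.
    path = [int(log_delta.argmax())]
    for bp in reversed(backptr):
        path.append(int(bp[path[-1]]))
    return list(reversed(path))

# Hypothetical 2-state, 3-symbol gesture model.
start = np.array([0.6, 0.4])
trans = np.array([[0.7, 0.3], [0.4, 0.6]])
emit = np.array([[0.5, 0.4, 0.1], [0.1, 0.3, 0.6]])
print(viterbi([0, 1, 2, 2], start, trans, emit))
```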

23 pages, 812 KiB  
Review
Smart Healthcare System in Server-Less Environment: Concepts, Architecture, Challenges, Future Directions
by Rup Kumar Deka, Akash Ghosh, Sandeep Nanda, Rabindra Kumar Barik and Manob Jyoti Saikia
Computers 2024, 13(4), 105; https://doi.org/10.3390/computers13040105 - 19 Apr 2024
Abstract
Server-less computing is a novel cloud-based paradigm that is gaining popularity today for running widely distributed applications. In server-less computing, features are available on a subscription basis. Server-less computing is advantageous to developers since it lets them deploy and run programs without worrying about the underlying infrastructure. A common choice for code deployment these days, server-less design is preferred because of its independence, affordability, and simplicity. The healthcare industry is one setting in which server-less computing can shine, yet the existing literature contains few studies that explore server-less computing for smart healthcare systems. A cloud infrastructure can help deliver services to both users and healthcare providers. The main aim of our research is to cover various topics on the implementation of server-less computing in the current healthcare sector. We survey studies that adopt server-less computing in the healthcare domain and report an in-depth analysis in this article. We also list various issues and challenges, as well as recommendations for adopting server-less computing in the healthcare sector. Full article

16 pages, 4444 KiB  
Article
Using Privacy-Preserving Algorithms and Blockchain Tokens to Monetize Industrial Data in Digital Marketplaces
by Borja Bordel Sánchez, Ramón Alcarria, Latif Ladid and Aurel Machalek
Computers 2024, 13(4), 104; https://doi.org/10.3390/computers13040104 - 18 Apr 2024
Abstract
The data economy has arisen in most developed countries. Instruments and tools to extract knowledge and value from large collections of data are now available and enable new industries, business models, and jobs. However, the current data market is asymmetric and prevents companies from competing fairly. On the one hand, only very specialized digital organizations can manage complex data technologies such as Artificial Intelligence and obtain great benefits from third-party data at a very reduced cost. On the other hand, datasets are produced by regular companies as by-products that bring them little value yet incur great costs. These companies have no mechanisms to negotiate a fair distribution of the benefits derived from their industrial data, which are often transferred for free. Therefore, new digital data-driven marketplaces must be enabled to facilitate fair data trading among all industrial agents. In this paper, we propose a blockchain-enabled solution to monetize industrial data. Industries can upload their data to the Inter-Planetary File System (IPFS) through a web interface, where the data are randomized by a privacy-preserving algorithm. In parallel, a blockchain network creates a Non-Fungible Token (NFT) to represent the dataset, so only the NFT owner can obtain the seed required to derandomize and extract all data from the IPFS. Data trading is then represented by NFT trading and is based on fungible tokens, so it is easier to adapt prices to the real economy. Auctions and purchases are also managed through a common web interface. Experimental validation based on a pilot deployment is conducted. The results show a significant improvement in the data transactions and in the quality of experience of industrial agents. Full article
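The role of the seed in this design can be illustrated with a deliberately simple, hedged sketch: a seed-keyed keystream XOR shows why only the seed holder (the NFT owner in the proposed marketplace) can de-randomize the published data. This is not the authors' privacy-preserving algorithm and it is not cryptographically strong.

```python
import random

# Illustrative sketch only: a seed-keyed keystream XOR, showing why holding the
# seed (tied to NFT ownership in the paper's design) is required to de-randomize
# the published dataset. This is NOT the authors' algorithm and is not
# cryptographically strong; a real deployment would use a vetted cipher.
def randomize(data: bytes, seed: int) -> bytes:
    keystream = random.Random(seed).randbytes(len(data))  # Python >= 3.9
    return bytes(a ^ b for a, b in zip(data, keystream))

dataset = b"pressure=1.2bar;temp=351K;batch=A7"   # hypothetical industrial record
seed = 20240413                                   # hypothetical seed held by the NFT owner
published = randomize(dataset, seed)              # what would be stored on IPFS
recovered = randomize(published, seed)            # XOR with the same keystream restores it
assert recovered == dataset
```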

24 pages, 20100 KiB  
Article
Continuous Authentication in the Digital Age: An Analysis of Reinforcement Learning and Behavioral Biometrics
by Priya Bansal and Abdelkader Ouda
Computers 2024, 13(4), 103; https://doi.org/10.3390/computers13040103 - 18 Apr 2024
Abstract
This research article delves into the development of a reinforcement learning (RL)-based continuous authentication system utilizing behavioral biometrics for user identification on computing devices. Keystroke dynamics are employed to capture unique behavioral biometric signatures, while a reward-driven RL model is deployed to authenticate users throughout their sessions. The proposed system augments conventional authentication mechanisms, fortifying them with an additional layer of security to create a robust continuous authentication framework compatible with static authentication systems. The methodology entails training an RL model to discern atypical user typing patterns and identify potentially suspicious activities. Each user’s historical data are utilized to train an agent, which undergoes preprocessing to generate episodes for learning purposes. The environment involves the retrieval of observations, which are intentionally perturbed to facilitate learning of nonlinear behaviors. The observation vector encompasses both ongoing and summarized features. A binary and minimalist reward function is employed, with principal component analysis (PCA) utilized for encoding ongoing features, and the double deep Q-network (DDQN) algorithm implemented through a fully connected neural network serving as the policy net. Evaluation results showcase training accuracy and equal error rate (EER) ranging from 94.7% to 100% and 0 to 0.0126, respectively, while test accuracy and EER fall within the range of approximately 81.06% to 93.5% and 0.0323 to 0.11, respectively, for all users as encoder features increase in number. These outcomes are achieved through RL’s iterative refinement of rewards via trial and error, leading to enhanced accuracy over time as more data are processed and incorporated into the system. Full article
(This article belongs to the Special Issue Innovative Authentication Methods)
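To make the learning component concrete, the following hedged PyTorch sketch shows the double deep Q-network (DDQN) target computation, in which the online network selects the next action and the target network evaluates it. The observation size, action set, and reward are placeholders, not the paper's PCA-encoded keystroke features or its exact reward function.

```python
import torch
import torch.nn as nn

# Minimal sketch of the double deep Q-network (DDQN) target used for training;
# feature dimension, action count, and rewards below are placeholders, not the
# paper's PCA-encoded keystroke features or its reward design.
obs_dim, n_actions, gamma = 16, 2, 0.99   # e.g. accept / flag the session

def make_net():
    return nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU(), nn.Linear(64, n_actions))

policy_net, target_net = make_net(), make_net()
target_net.load_state_dict(policy_net.state_dict())

def ddqn_targets(reward, next_obs, done):
    """DDQN: the online net selects the next action, the target net evaluates it."""
    with torch.no_grad():
        next_actions = policy_net(next_obs).argmax(dim=1, keepdim=True)
        next_q = target_net(next_obs).gather(1, next_actions).squeeze(1)
        return reward + gamma * next_q * (1.0 - done)

# One hypothetical batch of transitions.
batch = 4
targets = ddqn_targets(torch.ones(batch), torch.randn(batch, obs_dim), torch.zeros(batch))
print(targets.shape)  # torch.Size([4])
```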

21 pages, 1359 KiB  
Article
A Holistic Approach to Use Educational Robots for Supporting Computer Science Courses
by Zhumaniyaz Mamatnabiyev, Christos Chronis, Iraklis Varlamis, Yassine Himeur and Meirambek Zhaparov
Computers 2024, 13(4), 102; https://doi.org/10.3390/computers13040102 - 17 Apr 2024
Abstract
Robots are intelligent machines that are capable of autonomously performing intricate sequences of actions, with their functionality being primarily driven by computer programs and machine learning models. Educational robots are specifically designed and used for teaching and learning purposes and attract the interest of learners in gaining knowledge about science, technology, engineering, arts, and mathematics. Educational robots are widely applied in different fields of primary and secondary education, but their usage in teaching higher education subjects is limited. Even when educational robots are used in tertiary education, the use is sporadic, targets specific courses or subjects, and employs robots with narrow applicability. In this work, we propose a holistic approach to the use of educational robots in tertiary education. We demonstrate how an open-source educational robot can be used by colleges and universities in teaching multiple courses of a computer science curriculum, fostering computational and creative thinking in practice. We rely on an open-source and open-design educational robot, called FOSSBot, which contains various IoT technologies for measuring data, processing them, and interacting with the physical world. Thanks to its open nature, FOSSBot can be used in preparing the content and supporting learning activities for different subjects such as electronics, computer networks, artificial intelligence, computer vision, etc. To support our claim, we describe a computer science curriculum containing a wide range of computer science courses and explain how each course can be supported by providing indicative activities. The proposed one-year curriculum can be delivered at the postgraduate level, allowing computer science graduates to delve deep into Computer Science subjects. After examining related works that propose the use of robots in academic curricula, we identify the gap that still exists for a curriculum linked to an educational robot. We then present in detail each proposed course, the software libraries that can be employed for it, and the possible extensions to the open robot that would allow the curriculum to be further extended with more topics or enhanced with additional activities. With our work, we show that by incorporating educational robots in higher education we can address this gap and provide a new lever for boosting tertiary education. Full article
(This article belongs to the Special Issue Feature Papers in Computers 2024)

25 pages, 5122 KiB  
Article
Human Emotion Recognition Based on Spatio-Temporal Facial Features Using HOG-HOF and VGG-LSTM
by Hajar Chouhayebi, Mohamed Adnane Mahraz, Jamal Riffi, Hamid Tairi and Nawal Alioua
Computers 2024, 13(4), 101; https://doi.org/10.3390/computers13040101 - 16 Apr 2024
Abstract
Human emotion recognition is crucial in various technological domains, reflecting our growing reliance on technology. Facial expressions play a vital role in conveying and preserving human emotions. While deep learning has been successful in recognizing emotions in video sequences, it struggles to effectively model spatio-temporal interactions and identify salient features, limiting its accuracy. This research paper proposed an innovative algorithm for facial expression recognition which combined a deep learning algorithm and dynamic texture methods. In the initial phase of this study, facial features were extracted using the Visual-Geometry-Group (VGG19) model and input into Long-Short-Term-Memory (LSTM) cells to capture spatio-temporal information. Additionally, the HOG-HOF descriptor was utilized to extract dynamic features from video sequences, capturing changes in facial appearance over time. Combining these models using the Multimodal-Compact-Bilinear (MCB) model resulted in an effective descriptor vector. This vector was then classified using a Support Vector Machine (SVM) classifier, chosen for its simpler interpretability compared to deep learning models. This choice facilitates better understanding of the decision-making process behind emotion classification. In the experimental phase, the fusion method outperformed existing state-of-the-art methods on the eNTERFACE05 database, with an improvement margin of approximately 1%. In summary, the proposed approach exhibited superior accuracy and robust detection capabilities. Full article

26 pages, 4964 KiB  
Review
Digital Twin and 3D Digital Twin: Concepts, Applications, and Challenges in Industry 4.0 for Digital Twin
by April Lia Hananto, Andy Tirta, Safarudin Gazali Herawan, Muhammad Idris, Manzoore Elahi M. Soudagar, Djati Wibowo Djamari and Ibham Veza
Computers 2024, 13(4), 100; https://doi.org/10.3390/computers13040100 - 16 Apr 2024
Abstract
The rapid development of digitalization, the Internet of Things (IoT), and Industry 4.0 has led to the emergence of the digital twin concept. IoT is an important pillar of the digital twin. The digital twin serves as a crucial link, merging the physical and digital realms of Industry 4.0. Digital twins are beneficial to numerous industries, providing the capability, supported by IoT, to perform advanced analytics, create detailed simulations, and facilitate informed decision-making. This paper presents a review of the literature on digital twins, discussing their concepts, definitions, frameworks, application methods, and challenges. The review spans various domains, including manufacturing, energy, agriculture, maintenance, construction, transportation, and smart cities in Industry 4.0. The central argument of the study is that the term “3-dimensional (3D) digital twin” is a more fitting descriptor for digital twin technology assisted by IoT. This article therefore advocates for a shift in terminology, replacing “digital twin” with “3D digital twin” to more accurately depict the technology’s innate potential and capabilities in Industry 4.0. We aim to establish that “3D digital twin” offers a more precise and holistic representation of the technology. By doing so, we underline the digital twin’s analytical ability and its capacity to offer an intuitive understanding of systems, which can significantly streamline decision-making processes. Full article
(This article belongs to the Special Issue Artificial Intelligence in Industrial IoT Applications)

16 pages, 584 KiB  
Article
Detection of Deepfake Media Using a Hybrid CNN–RNN Model and Particle Swarm Optimization (PSO) Algorithm
by Aryaf Al-Adwan, Hadeel Alazzam, Noor Al-Anbaki and Eman Alduweib
Computers 2024, 13(4), 99; https://doi.org/10.3390/computers13040099 - 15 Apr 2024
Abstract
Deepfakes are digital audio, video, or images manipulated using machine learning algorithms. These manipulated media files can convincingly depict individuals doing or saying things they never actually did. Deepfakes pose significant risks to national security, financial markets, and personal privacy. The ability to create convincing deepfakes can also harm individuals’ reputations and can be used to spread disinformation and fake news. As such, there is a growing need for reliable and accurate methods to detect deepfakes and prevent their harmful effects. In this paper, a hybrid convolutional neural network (CNN) and recurrent neural network (RNN) with a particle swarm optimization (PSO) algorithm is utilized to demonstrate a deep learning strategy for detecting deepfake videos. High accuracy, sensitivity, specificity, and F1 scores were attained by the proposed approach when tested on two publicly available datasets: Celeb-DF and the Deepfake Detection Challenge Dataset (DFDC). Specifically, the proposed method achieved an average accuracy of 97.26% on Celeb-DF and an average accuracy of 94.2% on DFDC. The results were compared with other state-of-the-art methods and showed that the proposed method outperformed many of them. The proposed method can effectively detect deepfake videos, which is essential for identifying and preventing the spread of manipulated content online. Full article
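For readers unfamiliar with particle swarm optimization, the hedged sketch below implements a minimal PSO loop in Python. In the paper, PSO tunes the hybrid CNN-RNN; here it only minimizes a toy two-dimensional objective so the example stays self-contained, and every constant is a placeholder.

```python
import numpy as np

# Minimal particle swarm optimization (PSO) sketch. In the paper PSO tunes the
# hybrid CNN-RNN; here it just minimizes a toy 2-D objective so the loop stays
# self-contained (the objective and all constants below are placeholders).
def objective(x):
    return np.sum((x - np.array([0.3, -1.2])) ** 2, axis=1)

rng = np.random.default_rng(0)
n_particles, dims, iters = 20, 2, 50
w, c1, c2 = 0.7, 1.5, 1.5                      # inertia and acceleration weights

pos = rng.uniform(-5, 5, (n_particles, dims))
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), objective(pos)
gbest = pbest[pbest_val.argmin()].copy()

for _ in range(iters):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = pos + vel
    val = objective(pos)
    improved = val < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], val[improved]
    gbest = pbest[pbest_val.argmin()].copy()

print("best parameters:", gbest)   # should approach [0.3, -1.2]
```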

14 pages, 1352 KiB  
Article
MTL-AraBERT: An Enhanced Multi-Task Learning Model for Arabic Aspect-Based Sentiment Analysis
by Arwa Fadel, Mostafa Saleh, Reda Salama and Osama Abulnaja
Computers 2024, 13(4), 98; https://doi.org/10.3390/computers13040098 - 15 Apr 2024
Abstract
Aspect-based sentiment analysis (ABSA) is a fine-grained type of sentiment analysis; it works at the aspect level. It mainly focuses on extracting aspect terms from text or reviews, categorizing the aspect terms, and classifying the sentiment polarities toward each aspect term and aspect category. Aspect term extraction (ATE) and aspect category detection (ACD) are interdependent and closely associated tasks. However, the majority of the current literature on Arabic ABSA deals with these tasks individually, assumes that aspect terms are already identified, or employs a pipeline model. Pipeline solutions employ single models for each task, where the output of the ATE model is used as the input for the ACD model. This sequential process can lead to the propagation of errors across stages, as the performance of the ACD model is influenced by any errors produced by the ATE model. Therefore, the primary objective of this study was to investigate a multi-task learning approach based on transfer learning and transformers. We propose a multi-task learning model (MTL) that utilizes the pre-trained language model AraBERT, namely, the MTL-AraBERT model, for extracting Arabic aspect terms and aspect categories simultaneously. Specifically, we focused on training a single model that simultaneously and jointly addresses both subtasks. Moreover, this paper also proposes a model integrating AraBERT, single-pair classification, and BiLSTM/BiGRU that can be applied to aspect term polarity classification (APC) and aspect category polarity classification (ACPC). All proposed models were evaluated using the SemEval-2016 annotated Arabic hotel dataset. The experimental results of the MTL model demonstrate that the proposed models achieve comparable or better performance than state-of-the-art works (F1-scores of 80.32% for ATE and 68.21% for ACD). The proposed SPC-BERT model demonstrated high accuracy, reaching 89.02% and 89.36% for APC and ACPC, respectively. These improvements hold significant potential for future research in Arabic ABSA. Full article
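A multi-task arrangement of the kind described, one shared encoder with a token-level head for aspect term extraction and a sentence-level head for aspect category detection, can be sketched as follows. The checkpoint name, label counts, and head sizes are assumptions for illustration, not the exact MTL-AraBERT configuration.

```python
import torch.nn as nn
from transformers import AutoModel

# Hedged sketch of a shared-encoder multi-task head arrangement: one token-level
# head for aspect term extraction (ATE) and one sentence-level head for aspect
# category detection (ACD). The checkpoint name, label counts, and head sizes
# are assumptions for illustration, not the paper's exact MTL-AraBERT setup.
class MultiTaskABSA(nn.Module):
    def __init__(self, model_name="aubmindlab/bert-base-arabertv02",
                 n_ate_tags=3, n_categories=12):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(model_name)
        hidden = self.encoder.config.hidden_size
        self.ate_head = nn.Linear(hidden, n_ate_tags)      # BIO tags per token
        self.acd_head = nn.Linear(hidden, n_categories)    # categories per sentence

    def forward(self, input_ids, attention_mask):
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        token_states = out.last_hidden_state            # (batch, seq, hidden)
        ate_logits = self.ate_head(token_states)        # task 1: token tagging
        acd_logits = self.acd_head(token_states[:, 0])  # task 2: [CLS] classification
        return ate_logits, acd_logits

# Joint training would simply sum the two task losses, e.g.
# loss = ce_token(ate_logits, ate_labels) + ce_sent(acd_logits, acd_labels)
```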

19 pages, 506 KiB  
Article
A Survey of Security Challenges in Cloud-Based SCADA Systems
by Arwa Wali and Fatimah Alshehry
Computers 2024, 13(4), 97; https://doi.org/10.3390/computers13040097 - 11 Apr 2024
Abstract
Supervisory control and data acquisition (SCADA) systems enable industrial organizations to control and monitor real-time data and industrial processes. Migrating SCADA systems to cloud environments can enhance the performance of traditional systems by improving storage capacity, reliability, and availability while reducing technical and industrial costs. However, the increasing frequency of cloud cyberattacks poses a significant challenge to such systems. In addition, current research on cloud-based SCADA systems often focuses on a limited range of attack types, with findings scattered across various studies. This research comprehensively surveys the most common cybersecurity vulnerabilities and attacks facing cloud-based SCADA systems. It identifies four primary vulnerability factors: connectivity with cloud services, shared infrastructure, malicious insiders, and the security of SCADA protocols. This study categorizes cyberattacks targeting these systems into five main groups: hardware, software, communication and protocol-specific, control process, and insider attacks. In addition, this study proposes security solutions to mitigate the impact of cyberattacks on these control systems. Full article

13 pages, 662 KiB  
Systematic Review
The Use of Integrated Multichannel Records in Learning Studies in Higher Education: A Systematic Review of the Last 10 Years
by Irene González-Díez, Carmen Varela and María Consuelo Sáiz-Manzanares
Computers 2024, 13(4), 96; https://doi.org/10.3390/computers13040096 - 10 Apr 2024
Abstract
Neurophysiological measures have been used in the field of education to improve our knowledge about the cognitive processes underlying learning. Furthermore, the combined use of different neurophysiological measures has deepened our understanding of these processes. The main objective of this systematic review is to provide a comprehensive picture of the use of integrated multichannel records in higher education. The bibliographic sources for the review were the Web of Science, PsycINFO, Scopus, and Psicodoc databases. After a screening process by two independent reviewers, 10 articles were included according to prespecified inclusion criteria. In general, integrated recording of eye tracking and electroencephalograms was the most commonly used combination, followed by integrated recording of eye tracking and electrodermal activity. Cognitive load was the most widely investigated learning-related cognitive process using integrated multichannel records. To date, most research has focused only on one neurophysiological measure. Furthermore, to our knowledge, no study has systematically investigated the use of integrated multichannel records in higher education. This systematic review provides a comprehensive picture of the current use of integrated multichannel records in higher education. Its findings may help design innovative educational programs, particularly in the online context. The findings provide a basis for future research and decision making regarding the use of integrated multichannel records in higher education. Full article
(This article belongs to the Special Issue Feature Papers in Computers 2024)

20 pages, 4835 KiB  
Article
Voltage and Reactive Power-Optimization Model for Active Distribution Networks Based on Second-Order Cone Algorithm
by Yaxuan Xu, Jihao Han, Zi Yin, Qingyang Liu, Chenxu Dai and Zhanlin Ji
Computers 2024, 13(4), 95; https://doi.org/10.3390/computers13040095 - 09 Apr 2024
Abstract
To address the challenges associated with wind power integration, this paper analyzes the impact of distributed renewable energy on the voltage of the distribution network. Taking into account the fast control of photovoltaic inverters and the unique characteristics of photovoltaic arrays, we establish an active distribution network voltage reactive power-optimization model for planning the active distribution network. The model involves solving the original non-convex and non-linear power-flow-optimization problem. By introducing the second-order cone relaxation algorithm, we transform the model into a second-order cone programming model, making it easier to solve and yielding good results. The optimized parameters are then applied to the IEEE 33-node distribution system, where the phase angle of the node voltage is adjusted to optimize the reactive power of the entire power system, thereby demonstrating the effectiveness of utilizing a second-order cone programming algorithm for reactive power optimization in a comprehensive manner. Subsequently, active distribution network power quality control is implemented, resulting in a reduction in network loss from 0.41 MW to 0.02 MW. This reduces power loss rates, increases utilization efficiency by approximately 94%, optimizes power quality management, and ensures that users receive high-quality electrical energy. Full article
(This article belongs to the Special Issue Green Networking and Computing 2022)
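The second-order cone relaxation at the heart of the model can be illustrated on a toy two-bus feeder with cvxpy, where the non-convex branch-flow equality is relaxed to a convex inequality. The per-unit line and load data below are placeholders, not the paper's IEEE 33-node system.

```python
import cvxpy as cp

# Toy sketch of the second-order cone (SOC) relaxation idea used for branch-flow
# equations: the non-convex equality l = (P^2 + Q^2) / v is relaxed to the convex
# inequality l >= (P^2 + Q^2) / v. A single two-bus feeder with placeholder
# per-unit data is used here, not the paper's IEEE 33-node system.
r, x_reac = 0.05, 0.04        # line resistance / reactance (p.u., hypothetical)
p_load, q_load = 0.8, 0.3     # load at the receiving bus (p.u., hypothetical)

P = cp.Variable()             # active power sent into the line
Q = cp.Variable()             # reactive power sent into the line
l = cp.Variable(nonneg=True)  # squared current magnitude
v1 = cp.Variable(nonneg=True) # squared voltage at the receiving bus
v0 = 1.0                      # slack-bus squared voltage

constraints = [
    P - r * l == p_load,                                 # active power balance
    Q - x_reac * l == q_load,                            # reactive power balance
    v1 == v0 - 2 * (r * P + x_reac * Q) + (r**2 + x_reac**2) * l,
    l >= cp.quad_over_lin(cp.hstack([P, Q]), v0),        # SOC relaxation
    0.9**2 <= v1, v1 <= 1.1**2,                          # voltage limits
]
prob = cp.Problem(cp.Minimize(r * l), constraints)       # minimize line loss
prob.solve()
print("loss (p.u.):", prob.value, "receiving-bus voltage:", float(v1.value) ** 0.5)
```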

20 pages, 4067 KiB  
Article
Toward Optimal Virtualization: An Updated Comparative Analysis of Docker and LXD Container Technologies
by Daniel Silva, João Rafael and Alexandre Fonte
Computers 2024, 13(4), 94; https://doi.org/10.3390/computers13040094 - 09 Apr 2024
Abstract
Traditional hypervisor-assisted virtualization is a leading virtualization technology in data centers, providing cost savings (CapEx and OpEx), high availability, and disaster recovery. However, its inherent overhead may hinder performance, and it may not scale or be flexible enough for certain applications, such as microservices, where deploying an application using a virtual machine is a longer and more resource-intensive process. Container-based virtualization has received attention as an alternative, especially with Docker, which also facilitates continuous integration/continuous deployment (CI/CD). Meanwhile, LXD has reactivated interest in Linux LXC containers, providing unique operations, including live migration and full OS emulation. A careful analysis of both options is crucial for organizations to decide which best suits their needs. This study revisits key concepts about containers, exposes the advantages and limitations of each container technology, and provides an up-to-date performance comparison between both types of containers (application vs. system). Using extensive benchmarks and well-known workload metrics such as CPU scores, disk speed, and network throughput, we assess their performance and quantify their virtualization overhead. Our results show a clear overall trend toward meritorious performance and the maturity of both technologies (Docker and LXD), with low overhead and scalable performance. Notably, LXD shows greater stability, with more consistent performance variability. Full article

25 pages, 2999 KiB  
Article
GFLASSO-LR: Logistic Regression with Generalized Fused LASSO for Gene Selection in High-Dimensional Cancer Classification
by Ahmed Bir-Jmel, Sidi Mohamed Douiri, Souad El Bernoussi, Ayyad Maafiri, Yassine Himeur, Shadi Atalla, Wathiq Mansoor and Hussain Al-Ahmad
Computers 2024, 13(4), 93; https://doi.org/10.3390/computers13040093 - 06 Apr 2024
Abstract
Advancements in genomic technologies have paved the way for significant breakthroughs in cancer diagnostics, with DNA microarray technology standing at the forefront of identifying genetic expressions associated with various cancer types. Despite its potential, the vast dimensionality of microarray data presents a formidable challenge, necessitating efficient dimension reduction and gene selection methods to accurately identify cancerous tumors. In response to this challenge, this study introduces an innovative strategy for microarray data dimension reduction and crucial gene set selection, aiming to enhance the accuracy of cancerous tumor identification. Leveraging DNA microarray technology, our method focuses on pinpointing significant genes implicated in tumor development, aiding the development of sophisticated computerized diagnostic tools. Our technique synergizes gene selection with classifier training within a logistic regression framework, utilizing a generalized Fused LASSO (GFLASSO-LR) regularizer. This regularization incorporates two penalties: one for selecting pertinent genes and another for emphasizing adjacent genes of importance to the target class, thus achieving an optimal trade-off between gene relevance and redundancy. The optimization challenge posed by our approach is tackled using a sub-gradient algorithm, designed to meet specific convergence prerequisites. We establish that our algorithm’s objective function is convex, Lipschitz continuous, and possesses a global minimum, ensuring reliability in the gene selection process. A numerical evaluation of the method’s parameters further substantiates its effectiveness. Experimental outcomes affirm the GFLASSO-LR methodology’s high efficiency in processing high-dimensional microarray data for cancer classification. It effectively identifies compact gene subsets, significantly enhancing classification performance and demonstrating its potential as a powerful tool in cancer research and diagnostics. Full article
(This article belongs to the Special Issue Machine and Deep Learning in the Health Domain 2024)
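A hedged numpy sketch of the kind of objective described, logistic loss plus an L1 penalty for gene selection and a fused penalty on adjacent coefficients, minimized with a plain subgradient method, is shown below. The synthetic data, penalty weights, and step-size schedule are illustrative assumptions, not the paper's tuned settings.

```python
import numpy as np

# Hedged sketch of a GFLASSO-style logistic objective: logistic loss plus an L1
# penalty (gene selection) and a fused penalty on adjacent coefficients, solved
# with a plain subgradient method. Data, lambdas, and step sizes are placeholders.
rng = np.random.default_rng(1)
n, p = 60, 200                       # samples x "genes"
X = rng.standard_normal((n, p))
y = (X[:, :3].sum(axis=1) + 0.1 * rng.standard_normal(n) > 0).astype(float)

lam1, lam2 = 0.02, 0.02              # L1 and fused-penalty weights (assumed)
w = np.zeros(p)

def objective(w):
    z = X @ w
    loss = np.mean(np.log1p(np.exp(-(2 * y - 1) * z)))       # logistic loss
    return loss + lam1 * np.abs(w).sum() + lam2 * np.abs(np.diff(w)).sum()

for t in range(1, 501):
    z = X @ w
    grad_loss = X.T @ (1 / (1 + np.exp(-z)) - y) / n          # smooth part
    sub_l1 = lam1 * np.sign(w)                                # subgradient of L1 term
    d = np.sign(np.diff(w))                                   # fused-penalty sign pattern
    sub_fused = lam2 * (np.concatenate(([0.0], d)) - np.concatenate((d, [0.0])))
    w -= (0.5 / np.sqrt(t)) * (grad_loss + sub_l1 + sub_fused)

selected = np.flatnonzero(np.abs(w) > 1e-2)
print("objective:", round(objective(w), 4), "non-negligible coefficients:", selected[:10])
```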

34 pages, 7324 KiB  
Article
The Explainability of Transformers: Current Status and Directions
by Paolo Fantozzi and Maurizio Naldi
Computers 2024, 13(4), 92; https://doi.org/10.3390/computers13040092 - 04 Apr 2024
Abstract
An increasing demand for model explainability has accompanied the widespread adoption of transformers in various fields of application. In this paper, we conduct a survey of the existing literature on the explainability of transformers. We provide a taxonomy of methods based on the combination of transformer components that are leveraged to arrive at the explanation. For each method, we describe its mechanism and survey its applications. We find that attention-based methods, both alone and in conjunction with activation-based and gradient-based methods, are the most employed ones. Growing attention is also being devoted to the deployment of visualization techniques to help the explanation process. Full article
(This article belongs to the Special Issue Deep Learning and Explainable Artificial Intelligence)

23 pages, 2613 KiB  
Review
Study Trends and Core Content Trends of Research on Enhancing Computational Thinking: An Incorporated Bibliometric and Content Analysis Based on the Scopus Database
by Ling-Hsiu Chen and Ha Thi The Nguyen
Computers 2024, 13(4), 91; https://doi.org/10.3390/computers13040091 - 03 Apr 2024
Abstract
Over the last decade, research on developing computational thinking (CT) has garnered heightened attention. Assessing the publication trends and core contents of investigations on advancing CT is timely and exceedingly essential in education for directing future research initiatives, developing policies, and integrating findings into instructional materials. Therefore, this research reviewed publications on advancing CT to identify research trends and core contents published in the Scopus database from 2008 to May 2022. For this reason, this study applied bibliometric and content analysis to 132 selected publications. The bibliometric analysis indicates a steady increase in publications related to game-based learning (GBL) and CT, reaching a peak in 2021, with the United States emerging as the most prolific contributor in terms of authors, institutions, and countries. China leads in citations. The most cited document is Hsu’s 2018 paper in Computers and Education. Analysis of keywords and themes reveals core content trends, emphasizing teaching methods and attitudes aimed at improving CT via GBL. These results offer valuable insights for researchers and educators to inform their future work. However, future studies may benefit from including other databases such as Web of Science (WoS) and PubMed, employing alternative bibliometric software such as VOSviewer or CiteSpace, and collecting data from June 2022 onward. Full article
(This article belongs to the Special Issue Future Trends in Computer Programming Education)

13 pages, 762 KiB  
Article
Computer Vision Approach in Monitoring for Illicit and Copyrighted Objects in Digital Manufacturing
by Ihar Volkau, Sergei Krasovskii, Abdul Mujeeb and Helen Balinsky
Computers 2024, 13(4), 90; https://doi.org/10.3390/computers13040090 - 28 Mar 2024
Abstract
We propose a monitoring system for detecting illicit and copyrighted objects in digital manufacturing (DM). Our system is based on extracting and analyzing high-dimensional data from blueprints of three-dimensional (3D) objects. We aim to protect the legal interests of DM service providers, who may receive requests for 3D printing from external sources, such as emails or uploads. Such requests may contain blueprints of objects that are illegal, restricted, or otherwise controlled in the country of operation or protected by copyright. Without a reliable way to identify such objects, the service provider may unknowingly violate the laws and regulations and face legal consequences. Therefore, we propose a multi-layer system that automatically detects and flags such objects before the 3D printing process begins. We present efficient computer vision algorithms for object analysis and scalable system architecture for data storage and processing and explain the rationale behind the suggested system architecture. Full article
(This article belongs to the Special Issue Advanced Image Processing and Computer Vision)

17 pages, 680 KiB  
Article
Lite2: A Schemaless Zero-Copy Serialization Format
by Tianyi Chen, Xiaotong Guan, Shi Shuai, Cuiting Huang and Michal Aibin
Computers 2024, 13(4), 89; https://doi.org/10.3390/computers13040089 - 28 Mar 2024
Abstract
In the field of data transmission and storage, serialization formats play a crucial role by converting complex data structures into a byte stream that can be easily stored, transmitted, and reconstructed. Despite the myriad available serialization formats, ranging from JSON to Protobuf, each has limitations, particularly in balancing schema flexibility, performance, and data copying overhead. This paper introduces Lite2, a novel data serialization format that addresses these challenges by combining schemaless flexibility with the efficiency of zero-copy operations for flat or key–value pair data types. Unlike traditional formats that often require a predefined schema and involve significant data copying during serialization and deserialization, Lite2 offers a dynamic schemaless approach that eliminates unnecessary data copying, optimizing system performance and efficiency. Built upon a contiguously stored B-tree structure, Lite2 enables efficient data lookup and modification without deserialization, thereby achieving zero-copy operations. Full article
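The zero-copy idea can be illustrated with a hedged Python sketch in which fields are read in place from a contiguous buffer via struct.unpack_from and memoryview, so no intermediate object tree is materialized. The toy flat key-value layout below is not Lite2's actual B-tree format.

```python
import struct

# Hedged illustration of the zero-copy idea only: fields are read in place from
# a contiguous buffer, so no intermediate object tree is materialized. The toy
# layout below is NOT Lite2's B-tree format.
# Layout (little-endian): u32 key_len | key bytes | u32 value_len | value bytes ...
def build(pairs):
    buf = bytearray()
    for k, v in pairs:
        buf += struct.pack("<I", len(k)) + k + struct.pack("<I", len(v)) + v
    return bytes(buf)

def lookup(buf, wanted: bytes):
    """Scan the buffer and return a zero-copy memoryview of the value, if any."""
    view, off = memoryview(buf), 0
    while off < len(buf):
        (klen,) = struct.unpack_from("<I", buf, off); off += 4
        key = view[off:off + klen]; off += klen
        (vlen,) = struct.unpack_from("<I", buf, off); off += 4
        if key == wanted:
            return view[off:off + vlen]       # no copy of the payload
        off += vlen
    return None

blob = build([(b"sensor", b"thermo-7"), (b"reading", b"21.5C")])
val = lookup(blob, b"reading")
print(bytes(val))   # b'21.5C' -- copied only here, for display
```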

15 pages, 595 KiB  
Article
A New Computational Algorithm for Assessing Overdispersion and Zero-Inflation in Machine Learning Count Models with Python
by Luiz Paulo Lopes Fávero, Alexandre Duarte and Helder Prado Santos
Computers 2024, 13(4), 88; https://doi.org/10.3390/computers13040088 - 27 Mar 2024
Abstract
This article provides an overview of count data and count models, explores zero inflation, introduces likelihood ratio tests, and explains how the Vuong test can be used as a model selection criterion for assessing overdispersion. The motivation of this work was to create a Vuong test implementation from scratch using the Python programming language. This implementation supports our objective of enhancing the accessibility and applicability of the Vuong test in real-world scenarios, providing a valuable contribution to the academic community, since Python did not have an implementation of this statistical test. Full article
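As a hedged sketch of the statistic involved, the snippet below computes the basic (uncorrected) Vuong z-statistic from two vectors of per-observation log-likelihoods. Library-grade implementations, including the one the paper describes, typically add corrections for differing numbers of parameters; those refinements are omitted here, and the input log-likelihoods are synthetic.

```python
import numpy as np
from scipy import stats

# Hedged sketch of the basic (uncorrected) Vuong statistic for two non-nested
# models, computed from per-observation log-likelihood vectors. The paper's
# implementation may add AIC/BIC-style corrections, which are omitted here.
def vuong_test(loglik_1: np.ndarray, loglik_2: np.ndarray):
    m = loglik_1 - loglik_2                     # pointwise log-likelihood ratio
    n = len(m)
    v = np.sqrt(n) * m.mean() / m.std(ddof=1)   # Vuong z-statistic
    p_value = 2 * stats.norm.sf(abs(v))         # two-sided p-value
    return v, p_value

# Hypothetical per-observation log-likelihoods, e.g. ZIP (model 1) vs. Poisson (model 2).
rng = np.random.default_rng(42)
ll_zip = rng.normal(-1.9, 0.4, size=500)
ll_pois = ll_zip - rng.normal(0.05, 0.2, size=500)
v, p = vuong_test(ll_zip, ll_pois)
print(f"V = {v:.3f}, p = {p:.4f}  (V > 0 favors model 1)")
```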

16 pages, 1562 KiB  
Article
Evaluation of the Effectiveness of National Promotion Strategies for the Improvement of Privacy and Security
by Mauro Iacono and Michele Mastroianni
Computers 2024, 13(4), 87; https://doi.org/10.3390/computers13040087 - 27 Mar 2024
Abstract
Problems related to privacy and security preservation are among the concerns of governments and policymakers because of their impact on fundamental rights. Users are called to act responsibly whenever they are potentially exposed to related risks, but governments and parliaments must be proactive in creating safer conditions and more appropriate regulation, both to guide users towards good practices and to create a favorable environment that reduces exposure. In this paper, we propose a modeling framework to define and evaluate policies that identify and use appropriate levers to accomplish these tasks. We present a proof-of-concept which shows the viability of estimating in advance the effects of policies and policymakers’ initiatives by means of Influence Nets. Full article

30 pages, 5007 KiB  
Article
Temporal-Logic-Based Testing Tool Architecture for Dual-Programming Model Systems
by Salwa Saad, Etimad Fadel, Ohoud Alzamzami, Fathy Eassa and Ahmed M. Alghamdi
Computers 2024, 13(4), 86; https://doi.org/10.3390/computers13040086 - 25 Mar 2024
Abstract
Today, various applications in different domains increasingly rely on high-performance computing (HPC) to accomplish computations swiftly. Integrating one or more programming models alongside the programming language used enhances system parallelism, thereby improving performance. However, this integration can introduce runtime errors such as race conditions, deadlocks, or livelocks. Some of these errors may go undetected using conventional testing techniques, necessitating the exploration of additional methods for enhanced reliability. Formal methods, such as temporal logic, can be useful for detecting runtime errors since they have been widely used in real-time systems. Additionally, many software systems must adhere to temporal properties to ensure correct functionality. Temporal logics indeed serve as a formal framework that takes the temporal aspect into account when describing changes in elements or states over time. This paper proposes a temporal-logic-based testing tool utilizing instrumentation techniques designed for a dual-level programming model, namely, Message Passing Interface (MPI) and Open Multi-Processing (OpenMP), integrated with the C++ programming language. After a comprehensive study of temporal logic types, we found and proved that linear temporal logic is well suited as the foundation for our tool. Notably, while the tool is still in development, our approach is poised to effectively address the highlighted examples of runtime errors. This paper thoroughly explores various types and operators of temporal logic to inform the design of the testing tool based on temporal properties, aiming for a robust and reliable system. Full article
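As an illustration of the kind of linear temporal logic (LTL) properties such a tool can monitor at runtime (these examples are not taken from the paper), consider:

```latex
% Illustrative LTL properties (not taken from the paper):
% (1) Progress / deadlock freedom: a posted blocking receive eventually completes.
\mathbf{G}\,\bigl(\mathit{recv\_posted} \rightarrow \mathbf{F}\,\mathit{recv\_completed}\bigr)
% (2) Mutual exclusion for an OpenMP critical section: two threads are never
%     inside the section protecting a shared variable at the same time.
\mathbf{G}\,\neg\bigl(\mathit{in\_critical}_{t_1} \wedge \mathit{in\_critical}_{t_2}\bigr)
```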

28 pages, 3424 KiB  
Review
A Qualitative and Comparative Performance Assessment of Logically Centralized SDN Controllers via Mininet Emulator
by Mohammad Nowsin Amin Sheikh, I-Shyan Hwang, Muhammad Saibtain Raza and Mohammad Syuhaimi Ab-Rahman
Computers 2024, 13(4), 85; https://doi.org/10.3390/computers13040085 - 25 Mar 2024
Abstract
An alternative networking approach called Software Defined Networking (SDN) enables dynamic, programmatically efficient network construction, hence enhancing network performance. It splits a traditional network into a centralized control plane and a configurable data plane. Because the controller in the control plane, which may consist of one or more controllers and is regarded as the brain of the SDN network, is the core component overseeing every data plane action, controller functionality and performance are crucial for achieving optimal performance. Much controller research is available in the existing literature; nevertheless, no qualitative comparison study of OpenFlow-enabled distributed but logically centralized controllers exists. This paper includes a quantitative investigation of the performance of several distributed but logically centralized SDN controllers in custom network scenarios using Mininet, as well as a thorough qualitative comparison of them. More precisely, we give a qualitative evaluation of their attributes and classify and categorize 13 distributed but logically centralized SDN controllers according to their capabilities. Additionally, we offer a comprehensive SDN emulation tool, called Mininet-based SDN controller performance assessment, in this study. Using six performance metrics (bandwidth, round-trip time, delay, jitter, packet loss, and throughput), this work also assesses five distributed but logically centralized controllers within two custom network scenarios (uniform and non-uniform host distribution). Our analysis reveals that the Ryu controller outperforms the OpenDayLight controller in terms of latency, packet loss, and round-trip time, while the OpenDayLight controller performs well in terms of throughput, bandwidth, and jitter. Throughout the entire experiment, the HyperFlow and ONOS controllers performed worst on all performance metrics. Finally, we discuss detailed research findings on performance. These experimental results provide decision-making guidelines for selecting a controller. Full article
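A scenario of this general shape can be scripted with Mininet's Python API, as in the hedged sketch below: a small custom topology is attached to an external (logically centralized) controller assumed to be listening on 127.0.0.1:6633, and reachability and throughput are probed. The topology and host placement are placeholders, not the paper's exact uniform and non-uniform scenarios.

```python
#!/usr/bin/env python
from mininet.net import Mininet
from mininet.node import RemoteController, OVSSwitch
from mininet.topo import Topo
from mininet.log import setLogLevel

# Hedged sketch of a small custom Mininet scenario driven by an external SDN
# controller (e.g. Ryu or OpenDaylight assumed to listen on 127.0.0.1:6633).
# Topology, host counts, and tests are placeholders, not the paper's scenarios.
class TwoSwitchTopo(Topo):
    def build(self):
        s1, s2 = self.addSwitch("s1"), self.addSwitch("s2")
        self.addLink(s1, s2)
        for i in range(1, 4):                      # non-uniform host spread
            self.addLink(self.addHost(f"h{i}"), s1)
        self.addLink(self.addHost("h4"), s2)

if __name__ == "__main__":
    setLogLevel("info")
    net = Mininet(topo=TwoSwitchTopo(), switch=OVSSwitch,
                  controller=lambda name: RemoteController(name, ip="127.0.0.1", port=6633))
    net.start()
    net.pingAll()                                  # reachability / packet-loss check
    h1, h4 = net.get("h1", "h4")
    print(net.iperf((h1, h4)))                     # throughput between far hosts
    net.stop()
```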

21 pages, 6467 KiB  
Article
Architectural and Technological Approaches for Efficient Energy Management in Multicore Processors
by Claudiu Buduleci, Arpad Gellert, Adrian Florea and Remus Brad
Computers 2024, 13(4), 84; https://doi.org/10.3390/computers13040084 - 22 Mar 2024
Abstract
Benchmarks play an essential role in the performance evaluation of novel research concepts. Their effectiveness diminishes if they fail to exploit the available hardware of the evaluated microprocessor or, more broadly, if they are not consistent in comparing various systems. An empirical analysis of the well-established Splash-2 benchmark suite versus the latest version, Splash-4, was performed. It was shown that on a 64-core configuration, half of the simulated benchmarks reach temperatures well beyond the critical threshold of 105 °C, emphasizing the necessity of a multi-objective evaluation from at least the following perspectives: energy consumption, performance, chip temperature, and integration area. During the analysis, it was observed that the cores spend a large amount of time in the idle state, around 45% on average in some configurations. We exploit this by implementing a predictive dynamic voltage and frequency scaling (DVFS) technique called the Simple Core State Predictor (SCSP), used to enhance the Intel Nehalem architecture and simulated using Sniper. The aim was to decrease overall energy consumption by reducing power consumption at the core level while maintaining the same performance. Moreover, the SCSP technique, which operates with core-level abstract information, was applied in parallel with a Value Predictor (VP) or a Dynamic Instruction Reuse (DIR) technique, which rely on instruction-level information. Using the SCSP alone, a 9.95% reduction in power consumption and an energy reduction of 10.54% were achieved while maintaining performance. By combining the SCSP with the VP technique, a performance increase of 8.87% was obtained while reducing power and energy consumption by 3.13% and 8.48%, respectively. Full article
(This article belongs to the Special Issue Green Networking and Computing 2022)
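The general idea of predicting core idle intervals to drive DVFS can be illustrated with a hedged sketch: a per-core two-bit saturating counter predicts whether the next interval will be idle and could therefore be clocked down. This is a generic stand-in, not the actual SCSP design evaluated in the paper.

```python
# Hedged sketch of the general idea behind a core-state predictor driving DVFS:
# a per-core 2-bit saturating counter predicts whether the next interval will be
# idle, in which case the core could be switched to a low-power state. This is a
# generic illustration, not the actual SCSP design evaluated in the paper.
class IdlePredictor:
    def __init__(self, n_cores: int):
        self.counters = [2] * n_cores          # 0-1 predict busy, 2-3 predict idle

    def predict_idle(self, core: int) -> bool:
        return self.counters[core] >= 2

    def update(self, core: int, was_idle: bool) -> None:
        c = self.counters[core]
        self.counters[core] = min(3, c + 1) if was_idle else max(0, c - 1)

# Toy trace: core 0 alternates bursts of work with long idle stretches.
trace = [True, True, True, False, True, True, True, True, False, True]
pred, correct = IdlePredictor(n_cores=1), 0
for was_idle in trace:
    correct += pred.predict_idle(0) == was_idle
    pred.update(0, was_idle)
print(f"prediction accuracy on toy trace: {correct}/{len(trace)}")
```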
