Feature Papers in Computer Science & Engineering

A special issue of Electronics (ISSN 2079-9292). This special issue belongs to the section "Computer Science & Engineering".

Deadline for manuscript submissions: closed (15 December 2023) | Viewed by 160920

Special Issue Editors

1. BISITE Research Group, University of Salamanca, 37007 Salamanca, Spain
2. Air Institute, IoT Digital Innovation Hub, 37188 Salamanca, Spain
3. Department of Electronics, Information and Communication, Faculty of Engineering, Osaka Institute of Technology, Osaka 535-8585, Japan
Interests: artificial intelligence; smart cities; smart grids
Special Issues, Collections and Topics in MDPI journals
School of Computer Sciences, Western Illinois University, Macomb, IL 61455, USA
Interests: service robots; IoT; social media; big data; metaverse
Faculty of Engineering, Tokushima University, Tokushima 770-8501, Japan
Interests: language understanding and communication; affective computing; computer science; intelligent robots; social computing

Special Issue Information

Dear Colleagues, 

We are pleased to announce that the Section Computer Science and Engineering is compiling a collection of papers submitted by our Section’s Editorial Board members and leading scholars in this field of research. We welcome contributions as well as recommendations from Editorial Board members.

The aim of this Special Issue is to publish a set of the best original articles, including in-depth reviews of the state of the art and up-to-date contributions involving the use of intelligent models and/or the IoT in sectors of interest. Any work that brings innovative elements and is related to deep tech is welcome. We hope that these articles will be widely read and have a great influence on the field. All articles in this Special Issue will be compiled in a print edition book after the deadline and will be appropriately promoted.

Topics of interest are all those involving advanced intelligent models and their applications in areas such as:

  • IoT and its applications
  • Industry 4.0
  • Smart cities
  • Biotechnology
  • Precision agriculture
  • Fintech
  • Quantum economy
  • Blockchain
  • Cybersecurity
  • Big data analytics and artificial intelligence

Prof. Dr. Juan M. Corchado
Prof. Dr. Byung-Gyu Kim
Dr. Carlos A. Iglesias
Prof. Dr. In Lee
Prof. Dr. Fuji Ren
Prof. Dr. Rashid Mehmood
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Electronics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Published Papers (74 papers)


Research


18 pages, 675 KiB  
Article
A Metric Learning Perspective on the Implicit Feedback-Based Recommendation Data Imbalance Problem
Electronics 2024, 13(2), 419; https://doi.org/10.3390/electronics13020419 - 19 Jan 2024
Viewed by 526
Abstract
Paper recommendation systems are important for alleviating academic information overload. Such systems provide personalized recommendations based on implicit feedback from users, supplemented by their subject information, citation networks, etc. However, such recommender systems face problems like data sparsity for positive samples and uncertainty for negative samples. In this paper, we address these two issues and improve upon them from the perspective of metric learning. The algorithm is modeled as a push–pull loss function. For the positive sample pull-out operation, we introduce a context factor, which accelerates the convergence of the objective function through the multiplication rule to alleviate the data sparsity problem. For the negative sample push operation, we adopt an unbiased global negative sample method and use an intermediate matrix caching method to greatly reduce the computational complexity. Experimental results on two real datasets show that our method outperforms other baseline methods in terms of recommendation accuracy and computational efficiency. Moreover, our metric learning method that introduces context improves by more than 5% over the element-wise alternating least squares method. We demonstrate the potential of metric learning in addressing the problem of implicit feedback recommender systems with positive and negative sample imbalances. Full article
(This article belongs to the Special Issue Feature Papers in Computer Science & Engineering)
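The push-pull objective described above can be illustrated at a high level. The following NumPy toy is a sketch, not the authors' algorithm: the function name, the squared-hinge push term, and the way a context factor scales the pull term are all assumptions made for illustration.

```python
import numpy as np

def push_pull_loss(user, pos_items, neg_items, context=1.0, margin=1.0):
    """Toy push-pull metric-learning objective (illustrative only).

    Positives are pulled toward the user embedding, scaled by a
    context factor; negatives are pushed out past a margin.
    """
    d_pos = np.linalg.norm(pos_items - user, axis=1)     # distances to positives
    d_neg = np.linalg.norm(neg_items - user, axis=1)     # distances to negatives
    pull = context * np.sum(d_pos ** 2)                  # pull positives closer
    push = np.sum(np.maximum(0.0, margin - d_neg) ** 2)  # push negatives past margin
    return pull + push

user = np.zeros(2)
pos = np.array([[0.1, 0.0]])   # nearby positive -> small pull term
neg = np.array([[2.0, 0.0]])   # distant negative -> no push penalty
loss = push_pull_loss(user, pos, neg)
```

A larger context factor strengthens the pull gradient on positives, which is one plausible reading of how the multiplication rule accelerates convergence on sparse positive feedback.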

31 pages, 1740 KiB  
Article
Automated Over-the-Top Service Copyright Distribution Management System Using the Open Digital Rights Language
Electronics 2024, 13(2), 336; https://doi.org/10.3390/electronics13020336 - 12 Jan 2024
Viewed by 501
Abstract
As the demand and diversity of digital content increase, consumers now have simple and easy access to digital content through Over-the-Top (OTT) services. However, the rights of copyright holders remain unsecured due to issues with illegal copying and distribution of digital content, along with unclear practices in copyright royalty settlements and distributions. In response, this paper proposes an automated OTT service copyright distribution management system using the Open Digital Rights Language (ODRL) to safeguard the rights of copyright holders in the OTT service field. The proposed system ensures that the rights to exercise copyright transactions and agreements, such as trading of copyright, can only be carried out when all copyright holders of a single digital content agree based on the Threshold Schnorr Digital Signature. This approach takes into account multiple joint copyright holders, thereby safeguarding their rights. Furthermore, it ensures fair and transparent distribution of copyright royalties based on the ratio information outlined in ODRL. From the user’s perspective, the system not only provides services proactively based on the rights information specified in ODRL, but also employs zero-knowledge proof technology to handle sensitive information in OTT service copyright distribution, thereby addressing existing privacy concerns. This approach not only considers joint copyright holders, but also demonstrates its effectiveness in resolving prevalent issues in current OTT services, such as illegal digital content replication and distribution, and the unfair settlement and distribution of copyright royalties. Applying this proposed system to the existing OTT services and digital content market is expected to lead to the revitalization of the digital content trading market and the establishment of an OTT service environment that guarantees both vitality and reliability. Full article
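The system above builds on Schnorr signatures in threshold form. As background only, a plain single-signer Schnorr scheme over a toy subgroup looks roughly as follows; the parameters are deliberately tiny and insecure, and the threshold (multi-holder) extension used in the paper is not shown.

```python
import hashlib

# Toy subgroup parameters (illustrative only -- far too small for real use):
# p is a safe prime, q = (p - 1) / 2, and g = 4 generates the order-q subgroup.
p, q, g = 2039, 1019, 4

def H(r, msg):
    """Hash the commitment and message down to a challenge in Z_q."""
    data = str(r).encode() + msg
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

def keygen(x):
    return pow(g, x, p)          # public key y = g^x mod p

def sign(x, msg, k):
    r = pow(g, k, p)             # commitment (k must be fresh per signature)
    e = H(r, msg)                # challenge
    s = (k + x * e) % q          # response
    return e, s

def verify(y, msg, sig):
    e, s = sig
    r = (pow(g, s, p) * pow(y, -e, p)) % p   # recompute g^k
    return H(r, msg) == e

x = 7                            # secret key (toy value)
y = keygen(x)
sig = sign(x, b"license granted", k=5)
ok = verify(y, b"license granted", sig)
bad = verify(y, b"tampered", sig)  # fails except with negligible probability
```

In a threshold variant, several copyright holders would jointly produce `s` so that no transaction validates unless all required parties contribute their shares.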

19 pages, 2995 KiB  
Article
An Electrocardiogram Classification Using a Multiscale Convolutional Causal Attention Network
Electronics 2024, 13(2), 326; https://doi.org/10.3390/electronics13020326 - 12 Jan 2024
Viewed by 462
Abstract
Electrocardiograms (ECGs) play a pivotal role in the diagnosis and prediction of cardiovascular diseases (CVDs). However, traditional methods for ECG classification involve intricate signal processing steps, leading to high design costs. Addressing this concern, this study introduces the Multiscale Convolutional Causal Attention network (MSCANet), which utilizes a multiscale convolutional neural network combined with causal convolutional attention mechanisms for ECG signal classification from the PhysioNet MIT-BIH Arrhythmia database. Simultaneously, the dataset is balanced by downsampling the majority class and oversampling the minority class using the Synthetic Minority Oversampling Technique (SMOTE), effectively categorizing the five heartbeat types in the test dataset. The experimental results showcase the classifier’s performance, evaluated through accuracy, precision, sensitivity, and F1-score, culminating in an overall accuracy of 99.35%, precision of 96.55%, sensitivity of 96.73%, and an F1-score of 96.63%, surpassing existing methods. The application of this data balancing technique also significantly addresses the issue of data imbalance: compared to the data before balancing, accuracy improved significantly for the S class and the F class, with increases of approximately 8% and 13%, respectively. Full article
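The SMOTE balancing step used above has a simple core idea: synthesize minority samples by interpolating between a minority point and one of its nearest minority neighbours. The sketch below is a minimal NumPy illustration of that idea, not the paper's pipeline or the reference SMOTE implementation.

```python
import numpy as np

def smote_oversample(X_minority, n_new, k=2, seed=0):
    """Minimal SMOTE-style oversampling sketch (illustrative only).

    Each synthetic sample lies on the segment between a minority
    sample and one of its k nearest minority neighbours.
    """
    rng = np.random.default_rng(seed)
    n = len(X_minority)
    new = []
    for _ in range(n_new):
        i = rng.integers(n)
        # distances from sample i to every minority sample (self included)
        d = np.linalg.norm(X_minority - X_minority[i], axis=1)
        neighbours = np.argsort(d)[1:k + 1]   # skip the sample itself
        j = rng.choice(neighbours)
        lam = rng.random()                    # interpolation factor in [0, 1)
        new.append(X_minority[i] + lam * (X_minority[j] - X_minority[i]))
    return np.array(new)

X_min = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
synthetic = smote_oversample(X_min, n_new=4)
```

Because every synthetic point is a convex combination of two real minority samples, the new data stay inside the minority class's local geometry rather than being drawn at random.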

30 pages, 7882 KiB  
Article
Unsupervised Multiview Fuzzy C-Means Clustering Algorithm
Electronics 2023, 12(21), 4467; https://doi.org/10.3390/electronics12214467 - 30 Oct 2023
Cited by 1 | Viewed by 777
Abstract
The rapid development of information technology makes it easier to collect vast amounts of data through the cloud, the Internet and other sources. Multiview clustering is an important approach for clustering multiview data, which may come from multiple sources. The fuzzy c-means (FCM) algorithm for clustering (single-view) datasets was extended in the literature to process multiview datasets, and is called the multiview FCM (MV-FCM). However, most MV-FCM clustering algorithms and their extensions in the literature require prior knowledge of the number of clusters and are also highly sensitive to initialization. In this paper, we propose a novel MV-FCM clustering algorithm with an unsupervised learning framework, called the unsupervised MV-FCM (U-MV-FCM), which can search for an optimal number of clusters during the iteration process of the algorithm without specifying the number of clusters a priori. It is also free of initializations and parameter selection. We then use three synthetic and six benchmark datasets to compare the proposed U-MV-FCM with other existing algorithms and to highlight its practical implications. The experimental results show that our proposed U-MV-FCM algorithm is superior and more useful for clustering multiview datasets. Full article
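For readers unfamiliar with the baseline that the multiview variants extend, the classical single-view FCM alternates two updates: centres as membership-weighted means, then memberships from inverse distances. The sketch below is that textbook algorithm in NumPy, not the U-MV-FCM method of the paper.

```python
import numpy as np

def fcm(X, c, m=2.0, iters=100, seed=0):
    """Classical (single-view) fuzzy c-means.

    Returns cluster centres and the fuzzy membership matrix U
    (n x c, each row sums to 1).
    """
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)                  # random fuzzy partition
    for _ in range(iters):
        W = U ** m                                     # fuzzified memberships
        centres = (W.T @ X) / W.sum(axis=0)[:, None]   # weighted means
        d = np.linalg.norm(X[:, None, :] - centres[None, :, :], axis=2) + 1e-12
        U = 1.0 / (d ** (2.0 / (m - 1.0)))             # inverse-distance update
        U /= U.sum(axis=1, keepdims=True)
    return centres, U

X = np.array([[0.0], [0.1], [5.0], [5.1]])             # two obvious 1-D clusters
centres, U = fcm(X, c=2)
```

Note the two weaknesses the paper targets: `c` must be given in advance, and the random initialization of `U` can change the outcome.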

31 pages, 13108 KiB  
Article
Speech Emotion Recognition Using Convolutional Neural Networks with Attention Mechanism
Electronics 2023, 12(20), 4376; https://doi.org/10.3390/electronics12204376 - 23 Oct 2023
Viewed by 1310
Abstract
Speech emotion recognition (SER) is an interesting and difficult problem to handle. In this paper, we deal with it through the implementation of deep learning networks. We have designed and implemented six different deep learning networks, a deep belief network (DBN), a simple deep neural network (SDNN), an LSTM network (LSTM), an LSTM network with the addition of an attention mechanism (LSTM-ATN), a convolutional neural network (CNN), and a convolutional neural network with the addition of an attention mechanism (CNN-ATN), having in mind, apart from solving the SER problem, to test the impact of the attention mechanism on the results. Dropout and batch normalization techniques are also used to improve the generalization ability (prevention of overfitting) of the models as well as to speed up the training process. The Surrey Audio–Visual Expressed Emotion (SAVEE) database and the Ryerson Audio–Visual Database (RAVDESS) were used for the training and evaluation of our models. The results showed that the networks with the addition of the attention mechanism did better than the others. Furthermore, they showed that the CNN-ATN was the best among the tested networks, achieving an accuracy of 74% for the SAVEE database and 77% for the RAVDESS, and exceeding existing state-of-the-art systems for the same datasets. Full article
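The attention mechanism that distinguishes the best-performing networks above can be sketched independently of any specific architecture: score each time frame, softmax the scores, and pool frames by their weights. The NumPy toy below illustrates only this pooling step; the scoring vector is a placeholder, not the paper's learned attention.

```python
import numpy as np

def softmax(z):
    z = z - z.max()            # numerical stability
    e = np.exp(z)
    return e / e.sum()

def attention_pool(frames, w):
    """Attention pooling sketch: score frames, softmax, weighted sum."""
    scores = frames @ w        # one relevance score per time frame
    alpha = softmax(scores)    # attention weights, sum to 1
    return alpha @ frames, alpha

frames = np.array([[1.0, 0.0],
                   [0.0, 1.0],
                   [10.0, 0.0]])            # 3 frames, 2 features each
w = np.array([1.0, 0.0])                    # illustrative scoring vector
pooled, alpha = attention_pool(frames, w)
```

The effect is that emotionally salient frames (here, the high-scoring third frame) dominate the pooled utterance representation instead of being averaged away.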

16 pages, 6054 KiB  
Article
Enhancement of Product-Inspection Accuracy Using Convolutional Neural Network and Laplacian Filter to Automate Industrial Manufacturing Processes
Electronics 2023, 12(18), 3795; https://doi.org/10.3390/electronics12183795 - 07 Sep 2023
Viewed by 662
Abstract
The automation of the manufacturing process of printed circuit boards (PCBs) requires accurate PCB inspections, which in turn require clear images that accurately represent the product PCBs. However, if low-quality images are captured during the involved image-capturing process, accurate PCB inspections cannot be guaranteed. Therefore, this study proposes a method to effectively detect defective images for PCB inspection. This method involves using a convolutional neural network (CNN) and a Laplacian filter to achieve a higher accuracy of the classification of the obtained images as normal and defective images than that obtained using existing methods, with the results showing an improvement of 11.87%. Notably, the classification accuracy obtained using both a CNN and Laplacian filter is higher than that obtained using only CNNs. Furthermore, applying the proposed method to images of computer components other than PCBs results in a 5.2% increase in classification accuracy compared with only using CNNs. Full article
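The Laplacian-filter side of the method above can be illustrated with a common image-quality heuristic: a blurred or flat capture yields a low-variance Laplacian response, while a sharp capture yields a high one. This NumPy sketch shows only that filter idea, not the paper's CNN pipeline; the kernel is the standard 4-neighbour Laplacian.

```python
import numpy as np

LAPLACIAN = np.array([[0,  1, 0],
                      [1, -4, 1],
                      [0,  1, 0]], dtype=float)

def laplacian_variance(img):
    """Variance-of-Laplacian score: low for flat/blurred images,
    high for images with sharp edges."""
    h, w = img.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(h - 2):                 # valid (no-padding) convolution
        for j in range(w - 2):
            out[i, j] = np.sum(img[i:i + 3, j:j + 3] * LAPLACIAN)
    return out.var()

sharp = np.zeros((8, 8)); sharp[:, 4:] = 1.0   # hard edge -> strong response
blurred = np.full((8, 8), 0.5)                 # flat image -> zero response
```

Thresholding such a score is one plausible way to flag defective captures before they reach the inspection CNN.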

19 pages, 664 KiB  
Article
Collaborative Mixture-of-Experts Model for Multi-Domain Fake News Detection
Electronics 2023, 12(16), 3440; https://doi.org/10.3390/electronics12163440 - 14 Aug 2023
Cited by 2 | Viewed by 1097
Abstract
With the widespread popularity of online social media, people have come to increasingly rely on it as an information and news source. However, the growing spread of fake news on the Internet has become a serious threat to cyberspace and society at large. Although a series of previous works have proposed various methods for the detection of fake news, most of these methods focus on single-domain fake-news detection, resulting in poor detection performance when considering real-world fake news with diverse news topics. Furthermore, any news content may belong to multiple domains. Therefore, detecting multi-domain fake news remains a challenging problem. In this study, we propose a multi-domain fake-news detection framework based on a mixture-of-experts model. The input text is fed to BertTokenizer and embeddings are obtained by jointly calling CLIP to obtain the fusion features. This avoids the introduction of noise and redundant features during feature fusion. We also propose a collaboration module, in which a sentiment module is used to analyze the inherent sentimental information of the text, and sentence-level and domain embeddings are used to form the collaboration module. This module can adaptively determine the weights of the expert models. Finally, the mixture-of-experts model, composed of TextCNN, is used to learn the features and construct a high-performance fake-news detection model. We conduct extensive experiments on the Weibo21 dataset, the results of which indicate that our multi-domain methods perform well, in comparison with baseline methods, on the Weibo21 dataset. Our proposed framework presents greatly improved multi-domain fake-news detection performance. Full article
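The core mixture-of-experts mechanic described above, where a collaboration/gating module adaptively weights expert models, can be sketched in a few lines. The toy below uses trivial experts and a linear gate; all names and shapes are illustrative assumptions, not the paper's TextCNN experts or sentiment-aware gate.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def moe_predict(x, experts, gate_w):
    """Mixture-of-experts sketch: the gate produces weights over
    experts, and the output is the weighted sum of expert outputs."""
    weights = softmax(gate_w @ x)                 # one weight per expert
    outputs = np.array([f(x) for f in experts])   # each expert's prediction
    return weights @ outputs, weights

experts = [lambda x: 0.0, lambda x: 1.0]          # toy per-domain scorers
gate_w = np.array([[5.0, 0.0],
                   [0.0, 5.0]])                   # routes by dominant feature
x = np.array([0.0, 1.0])                          # input favouring expert 2
score, weights = moe_predict(x, experts, gate_w)
```

Because the gate is input-dependent, a news item whose features straddle several domains receives a blend of experts rather than a single hard assignment, which is exactly the multi-domain motivation above.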

23 pages, 6530 KiB  
Article
Deep-Learning-Based Natural Ventilation Rate Prediction with Auxiliary Data in Mismeasurement Sensing Environments
Electronics 2023, 12(15), 3294; https://doi.org/10.3390/electronics12153294 - 31 Jul 2023
Viewed by 727
Abstract
Predicting the amount of natural ventilation from environmental data such as differential pressure, wind, temperature, and humidity collected through IoT sensing is an important issue for optimal HVAC control to maintain comfortable air quality. Recently, some research has used deep learning to provide high accuracy in natural ventilation prediction, which requires highly reliable IoT sensing data to succeed. However, it is practically difficult to predict the accurate natural ventilation rate (NVR) in a mismeasurement sensing environment, since inaccurate IoT sensing data are collected, for example, due to sensor malfunction. We therefore need a way to maintain high deep-learning-based NVR prediction accuracy in mismeasurement sensing environments. In this study, to overcome the degradation of accuracy due to mismeasurement, we use complementary auxiliary data generated by semi-supervised learning and selected by importance analysis. That is, the NVR prediction model is reliably trained by generating and selecting auxiliary data, and the natural ventilation is then predicted from the integration of mismeasured and auxiliary data by a bagging-based ensemble approach. Based on the experimental results, we confirmed that the proposed method improved the natural ventilation rate prediction accuracy by 25% compared with the baseline approach. In the context of deep-learning-based natural ventilation prediction using various IoT sensing data, we address the issue of realistic mismeasurement by generating auxiliary data that exploit the rapidly or slowly changing characteristics of the sensing data, which can improve the reliability of observation data. Full article
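The bagging-based ensemble step mentioned above follows a standard pattern: fit many models on bootstrap resamples and average their predictions. The sketch below illustrates only that pattern with a deliberately trivial one-parameter "model" (a least-squares slope through the origin); it is not the paper's deep-learning ensemble.

```python
import numpy as np

def make_bootstrap_models(X, y, n_models=10, seed=0):
    """Fit one toy model per bootstrap resample of (X, y)."""
    rng = np.random.default_rng(seed)
    models = []
    for _ in range(n_models):
        idx = rng.integers(0, len(X), len(X))       # bootstrap resample
        Xb, yb = X[idx], y[idx]
        w = float(Xb @ yb / (Xb @ Xb))              # slope through the origin
        models.append(lambda x, w=w: w * x)         # bind w per model
    return models

def bagging_predict(models, x):
    """Bagging sketch: average the ensemble members' predictions."""
    return float(np.mean([m(x) for m in models]))

X = np.array([1.0, 2.0, 3.0, 4.0])
y = 2.0 * X                                         # noiseless target, slope 2
models = make_bootstrap_models(X, y)
pred = bagging_predict(models, 5.0)
```

In the paper's setting, each member would instead be trained on a different mix of (possibly mismeasured) observed data and generated auxiliary data, so averaging damps the influence of any one corrupted view.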

20 pages, 3362 KiB  
Article
Autonomous Drone Electronics Amplified with Pontryagin-Based Optimization
Electronics 2023, 12(11), 2541; https://doi.org/10.3390/electronics12112541 - 05 Jun 2023
Cited by 1 | Viewed by 971
Abstract
In the era of electrification and artificial intelligence, direct current motors are widely utilized with numerous innovative adaptive and learning methods. Traditional methods utilize model-based algebraic techniques with system identification, such as recursive least squares, extended least squares, and autoregressive moving averages. The new method known as deterministic artificial intelligence employs physical-based process dynamics to achieve target trajectory tracking. There are two common autonomous trajectory-generation algorithms: sinusoidal function- and Pontryagin-based generation algorithms. The Pontryagin-based optimal trajectory with deterministic artificial intelligence for DC motors is proposed and its performance compared for the first time in this paper. This paper aims to simulate model following and deterministic artificial intelligence methods using the sinusoidal and Pontryagin methods and to compare the differences in their performance when following the challenging step function slew maneuver. Full article
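As background for the Pontryagin-based trajectory generation above: for a double-integrator plant (a crude stand-in for a DC motor's rigid-body dynamics, not the paper's model), minimizing the control energy ∫u² dt for a rest-to-rest maneuver yields, via Pontryagin's minimum principle, a control that is linear in time and a cubic position profile. The closed forms below follow from that derivation.

```python
import numpy as np

def min_energy_trajectory(T, n=1001):
    """Rest-to-rest min-energy trajectory for a double integrator,
    moving position from 0 to 1 in time T with zero end velocities.

    Pontryagin's principle gives u(t) = 6/T^2 - 12 t/T^3 (linear),
    hence a cubic position and a parabolic velocity profile.
    """
    t = np.linspace(0.0, T, n)
    u = 6.0 / T**2 - 12.0 * t / T**3           # optimal control, linear in t
    v = 6.0 * t / T**2 - 6.0 * t**2 / T**3     # velocity, zero at both ends
    x = 3.0 * (t / T)**2 - 2.0 * (t / T)**3    # position, 0 -> 1
    return t, x, v, u

t, x, v, u = min_energy_trajectory(T=2.0)
```

This is the kind of autonomously generated reference that a model-following or deterministic-artificial-intelligence controller would then be asked to track, as opposed to a sinusoidal reference.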

14 pages, 374 KiB  
Article
Intelligent Detection of Cryptographic Misuse in Android Applications Based on Program Slicing and Transformer-Based Classifier
Electronics 2023, 12(11), 2460; https://doi.org/10.3390/electronics12112460 - 30 May 2023
Viewed by 920
Abstract
The utilization of cryptography in applications has assumed paramount importance with the escalating security standards for Android applications. The adept utilization of cryptographic APIs can significantly enhance application security; however, in practice, software developers frequently misuse these APIs due to their inadequate grasp of cryptography. A study reveals that a staggering 88% of Android applications exhibit some form of cryptographic misuse. Although certain tools have been proposed to detect such misuse, most of them rely on manually devised rules which are susceptible to errors and require researchers possessing an exhaustive comprehension of cryptography. In this study, we propose a research methodology founded on a neural network model to pinpoint code related to cryptography by employing program slices as a dataset. We subsequently employ active learning, rooted in clustering, to select the portion of the data harboring security issues for annotation in accordance with the Android cryptography usage guidelines. Ultimately, we feed the dataset into a transformer and multilayer perceptron (MLP) to derive the classification outcome. Comparative experiments are also conducted to assess the model’s efficacy in comparison to other existing approaches. Furthermore, planned combination tests utilizing supplementary techniques aim to validate the model’s generalizability. Full article

15 pages, 4882 KiB  
Article
Design of Enhanced Document HTML and the Reliable Electronic Document Distribution Service
Electronics 2023, 12(10), 2176; https://doi.org/10.3390/electronics12102176 - 10 May 2023
Viewed by 897
Abstract
Electronic documents are becoming increasingly popular in various industries and sectors, as they provide greater convenience and cost-efficiency than physical documents. PDF is a widely used format for creating and sharing electronic documents, while HTML is commonly used in mobile environments as the foundation for creating web pages displayed on mobile devices, such as smartphones and tablets. HTML is becoming a more critical document format as mobile environments have become the primary communication channel. However, HTML has no standard content-integrity feature, and an HTML-based electronic document consists of a set of related files, which makes it vulnerable as a format for reliable electronic documents. We previously proposed Document HTML, a single independent file with extended meta tags, as a reliable electronic document format, and Chained Document, a single independent file anchored to a blockchain network to secure content integrity and delivery assurance. In this paper, we improve the definition of Document HTML and investigate certified electronic document intermediaries. Additionally, we design and validate an electronic document distribution service using the Enhanced Document HTML for real usability. Moreover, we conduct experimental verification using a tax notification electronic document, which has one of the highest distribution volumes in Korea, to confirm how Document HTML provides content integrity verification. Document HTML can be used by an enterprise that must send a reliable electronic document to a customer through an electronic document delivery service provider. Full article
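The content-integrity idea behind Document HTML can be illustrated in its simplest form: embed or distribute a cryptographic digest of the document body, and have the verifier recompute and compare it. This hashlib sketch is only the bare principle; the paper's actual meta-tag format, blockchain anchoring, and intermediary workflow are not modeled.

```python
import hashlib

def fingerprint(html: str) -> str:
    """Content fingerprint: a SHA-256 digest of the document text,
    which could be carried, e.g., in an extended meta tag."""
    return hashlib.sha256(html.encode("utf-8")).hexdigest()

def verify(html: str, expected: str) -> bool:
    """Recompute the digest and compare with the stored one."""
    return fingerprint(html) == expected

doc = "<html><body>Tax notice: 120,000 KRW due.</body></html>"
tag = fingerprint(doc)                         # stored with the document
ok = verify(doc, tag)                          # untouched document passes
tampered = verify(doc.replace("120,000", "12,000"), tag)  # edit is detected
```

A single self-contained file plus an embedded digest addresses the multi-file weakness of plain HTML noted above, since any change to the delivered content breaks verification.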

13 pages, 444 KiB  
Article
BSTC: A Fake Review Detection Model Based on a Pre-Trained Language Model and Convolutional Neural Network
Electronics 2023, 12(10), 2165; https://doi.org/10.3390/electronics12102165 - 09 May 2023
Cited by 2 | Viewed by 2501
Abstract
Detecting fake reviews can help customers make better purchasing decisions and maintain a positive online business environment. In recent years, pre-trained language models have significantly improved the performance of natural language processing tasks. These models are able to generate different representation vectors for each word in different contexts, thus solving the challenge of multiple meanings of a word, which traditional word vector methods such as Word2Vec cannot solve, and, therefore, better capturing the text’s contextual information. In addition, we consider that reviews generally contain rich opinion and sentiment expressions, while most pre-trained language models, including BERT, lack the consideration of sentiment knowledge in the pre-training stage. Based on the above considerations, we propose a new fake review detection model based on a pre-trained language model and convolutional neural network, which is called BSTC. BSTC considers BERT, SKEP, and TextCNN, where SKEP is a pre-trained language model based on sentiment knowledge enhancement. We conducted a series of experiments on three gold-standard datasets, and the findings illustrate that BSTC outperforms state-of-the-art methods in detecting fake reviews. It achieved the highest accuracy on all three gold-standard datasets—Hotel, Restaurant, and Doctor—with 93.44%, 91.25%, and 92.86%, respectively. Full article

24 pages, 669 KiB  
Article
Complement Recognition-Based Formal Concept Analysis for Automatic Extraction of Interpretable Concept Taxonomies from Text
Electronics 2023, 12(9), 2137; https://doi.org/10.3390/electronics12092137 - 07 May 2023
Viewed by 961
Abstract
The increasing scale and pace of the production of digital documents have generated a need for automatic tools to analyze documents and extract underlying concepts and knowledge in order to help humans manage information overload. Specifically, since most information comes in the form of text, natural language processing tools are needed that are able to analyze the sentences and transform them into an internal representation that can be handled by computers to perform inferences and reasoning. In turn, these tools often work based on linguistic resources for the various levels of analysis (morphological, lexical, syntactic and semantic). The resources are language (and sometimes even domain) specific and typically must be manually produced by human experts, increasing their cost and limiting their availability. Especially relevant are concept taxonomies, which allow us to properly interpret the textual content of documents. This paper presents an intelligent module to extract relevant domain knowledge from free text by means of Concept Hierarchy Extraction techniques. In particular, the underlying model is provided using Formal Concept Analysis, while a crucial role is played by an expert system for language analysis that can recognize different types of indirect objects (a component very rich in information) in English. Full article
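Formal Concept Analysis, the underlying model mentioned above, derives a lattice of (extent, intent) pairs from an object–attribute table. The naive enumerator below is a self-contained sketch of that derivation for a tiny hand-made context (the animal example is illustrative, not from the paper); it relies on the fact that the intents are exactly the intersections of object attribute sets.

```python
def concepts(context):
    """Enumerate all formal concepts (extent, intent) of a small
    object -> attribute-set context."""
    objs = list(context)
    all_attrs = frozenset().union(*context.values())

    def extent(intent):
        # objects possessing every attribute of the intent
        return frozenset(o for o in objs if intent <= context[o])

    intents = {all_attrs}            # intersection over the empty object set
    for o in objs:                   # close under intersection with each object
        intents |= {i & frozenset(context[o]) for i in intents}
    return {(extent(i), i) for i in intents}

ctx = {"lion":  {"mammal", "predator"},
       "deer":  {"mammal"},
       "eagle": {"predator"}}
cs = concepts(ctx)
```

Each resulting pair is a node of the concept taxonomy: for instance, the concept with intent `{"mammal"}` groups exactly the objects sharing that attribute, which is the kind of interpretable grouping the paper extracts from text.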

18 pages, 746 KiB  
Article
MazeGen: A Low-Code Framework for Bootstrapping Robotic Navigation Scenarios for Smart Manufacturing Contexts
Electronics 2023, 12(9), 2058; https://doi.org/10.3390/electronics12092058 - 29 Apr 2023
Viewed by 854
Abstract
In this research, we describe the MazeGen framework (a maze generator), which generates navigation scenarios using Grammatical Evolution for robots or drones to navigate. The maze generator uses evolutionary algorithms to create robotic navigation scenarios with different semantic levels along a scenario profile. Grammatical Evolution is a Machine Learning technique from the Evolutionary Computing branch that uses a BNF grammar to describe the language of the possible scenario universe and a numerical encoding of individual scenarios along that grammar. Through a mapping process, it converts new numerical individuals, obtained by operations on the parents' encodings, into new solutions by means of the grammar. In this context, the grammar describes the scenario elements and some composition rules. We also analyze associated concepts of complexity, understanding complexity as the cost of producing the scenario and the skill levels needed to move around the maze. Preliminary results and statistics evidence a low correlation between complexity and the number of obstacles placed, as configurations with more difficult obstacle dispositions were found in the early stages of the evolution process. Moreover, when analyzing mazes in terms of their semantic meaning, earlier versions of the experiment not only proved too simplistic for the Smart Manufacturing domain but also lacked correlation with possible real-world scenarios, as evidenced in our experiments, where the most semantically meaningful results had the lowest fitness scores. The results also reflect the emerging-technology status of this approach, as we still need to determine how to reliably find solvable scenarios and characterize those belonging to the same equivalence class.
Despite being an emerging technology, MazeGen allows users to simplify the process of building configurations for smart manufacturing environments by making it faster, more efficient, and reproducible, and it also puts the non-expert programmer at the center of the development process, as little boilerplate code is needed. Full article
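The genotype-to-phenotype mapping at the heart of Grammatical Evolution can be sketched as follows. The toy BNF grammar and genomes are illustrative assumptions, not MazeGen's actual scenario grammar; each codon picks a production via the standard modulo rule:

```python
# Minimal grammatical-evolution genotype-to-phenotype mapping.
GRAMMAR = {
    "<maze>": [["<cell>"], ["<cell>", " ", "<maze>"]],
    "<cell>": [["wall"], ["floor"], ["obstacle"]],
}

def map_genotype(genome, start="<maze>", max_wraps=2):
    """Expand the start symbol left to right, consuming one codon per
    non-terminal; codon % len(choices) selects the production."""
    symbols = [start]
    out = []
    i, wraps = 0, 0
    while symbols:
        sym = symbols.pop(0)
        if sym not in GRAMMAR:
            out.append(sym)          # terminal: emit as-is
            continue
        if i >= len(genome):         # wrap the genome a bounded number of times
            i, wraps = 0, wraps + 1
            if wraps > max_wraps:
                raise ValueError("invalid individual: ran out of codons")
        choices = GRAMMAR[sym]
        production = choices[genome[i] % len(choices)]
        i += 1
        symbols = list(production) + symbols
    return "".join(out)
```

For example, the genome [1, 2, 0, 0] decodes to the two-cell scenario "obstacle wall", while [0, 1] decodes to the single cell "floor".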
(This article belongs to the Special Issue Feature Papers in Computer Science & Engineering)

24 pages, 800 KiB  
Article
Clustered Federated Learning Based on Momentum Gradient Descent for Heterogeneous Data
Electronics 2023, 12(9), 1972; https://doi.org/10.3390/electronics12091972 - 24 Apr 2023
Cited by 1 | Viewed by 1106
Abstract
Data heterogeneity may significantly deteriorate the performance of federated learning, since the clients' data distributions are divergent. To mitigate this issue, an effective method is to partition these clients into suitable clusters. However, existing clustered federated learning is only based on the gradient descent method, which leads to poor convergence performance. To accelerate the convergence rate, this paper proposes clustered federated learning based on momentum gradient descent (CFL-MGD) by integrating momentum and clustering techniques. In CFL-MGD, scattered clients are partitioned into the same cluster when they have the same learning tasks. Meanwhile, each client in the same cluster utilizes its own private data to update local model parameters through momentum gradient descent. Moreover, we present gradient averaging and model averaging schemes for global aggregation. To understand the proposed algorithm, we also prove that CFL-MGD converges at an exponential rate for smooth and strongly convex loss functions. Finally, we validate the effectiveness of CFL-MGD on the CIFAR-10 and MNIST datasets. Full article
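One round of the momentum-plus-model-averaging idea can be sketched for a single cluster: each client runs momentum gradient descent from the cluster model on its own loss, and the cluster then averages the resulting models. The quadratic client losses and hyperparameters below are illustrative, not the paper's experimental setup:

```python
def momentum_step(w, v, grad, lr=0.1, beta=0.9):
    """Classical momentum (heavy-ball) update on a scalar parameter."""
    v = beta * v + grad(w)
    return w - lr * v, v

def client_update(w0, grad, steps=200):
    """Local training: momentum gradient descent on the client's private loss."""
    w, v = w0, 0.0
    for _ in range(steps):
        w, v = momentum_step(w, v, grad)
    return w

def cluster_round(w_global, client_grads):
    """Model averaging: every client starts from the shared cluster model."""
    local_models = [client_update(w_global, g) for g in client_grads]
    return sum(local_models) / len(local_models)

# Two clients in one cluster with minima at 1.0 and 3.0; the averaged
# cluster model settles near 2.0.
grads = [lambda w: 2 * (w - 1.0), lambda w: 2 * (w - 3.0)]
w = cluster_round(0.0, grads)
```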
(This article belongs to the Special Issue Feature Papers in Computer Science & Engineering)

21 pages, 993 KiB  
Article
Activity Recognition in Smart Homes via Feature-Rich Visual Extraction of Locomotion Traces
Electronics 2023, 12(9), 1969; https://doi.org/10.3390/electronics12091969 - 24 Apr 2023
Cited by 2 | Viewed by 1405
Abstract
The proliferation of sensors in smart homes makes it possible to monitor human activities, routines, and complex behaviors in an unprecedented way. Hence, human activity recognition has gained increasing attention over the last few years as a tool to improve healthcare and well-being in several applications. However, most existing activity recognition systems rely on cameras or wearable sensors, which may be obtrusive and may invade the user’s privacy, especially at home. Moreover, extracting expressive features from a stream of data provided by heterogeneous smart-home sensors is still an open challenge. In this paper, we investigate a novel method to detect activities of daily living by exploiting unobtrusive smart-home sensors (i.e., passive infrared position sensors and sensors attached to everyday objects) and vision-based deep learning algorithms, without the use of cameras or wearable sensors. Our method relies on depicting the locomotion traces of the user and visual clues about their interaction with objects on a floor plan map of the home, and utilizes pre-trained deep convolutional neural networks to extract features for recognizing ongoing activity. One additional advantage of our method is its seamless extendibility with additional features based on the available sensor data. Extensive experiments with a real-world dataset and a comparison with state-of-the-art approaches demonstrate the effectiveness of our method. Full article
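The core visual-extraction step, turning a stream of position readings into an image that a pretrained CNN could consume, can be sketched as below. The grid size and the convention that positions are normalized to [0, 1) are illustrative assumptions:

```python
def trace_to_image(trace, size=8):
    """Rasterize a locomotion trace of (x, y) positions in [0, 1)^2
    into a size x size binary grid, the "floor plan map" idea."""
    img = [[0] * size for _ in range(size)]
    for x, y in trace:
        col = min(int(x * size), size - 1)
        row = min(int(y * size), size - 1)
        img[row][col] = 1
    return img
```

Additional channels (e.g., one per interacted object) could be stacked the same way, which is what makes the representation easy to extend with new sensor data.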
(This article belongs to the Special Issue Feature Papers in Computer Science & Engineering)

27 pages, 517 KiB  
Article
A Secure and Anonymous Authentication Protocol Based on Three-Factor Wireless Medical Sensor Networks
Electronics 2023, 12(6), 1368; https://doi.org/10.3390/electronics12061368 - 13 Mar 2023
Cited by 5 | Viewed by 1337
Abstract
Wireless medical sensor networks (WMSNs), a type of wireless sensor network (WSN), have enabled medical professionals to monitor patients' health information in real time in order to identify and diagnose their conditions. However, since wireless communication is performed through an open channel, an attacker can steal or manipulate the transmitted and received information. Because these attacks are directly related to patients' lives, it is necessary to prevent them upfront by securing WMSN communication. Although authentication protocols are continuously developed to establish the security of WMSN communication, they are still vulnerable to attacks. Recently, Yuanbing et al. proposed a secure authentication scheme for WMSNs. They emphasized that their protocol is able to resist various attacks and can ensure mutual authentication. Unfortunately, this paper demonstrates that Yuanbing et al.'s protocol is vulnerable to stolen smart card attacks, ID/password guessing attacks, and sensor node capture attacks. To overcome the weaknesses of existing studies and to ensure secure communication and user anonymity in WMSNs, we propose a secure and anonymous authentication protocol. The proposed protocol can prevent sensor capture, guessing, and man-in-the-middle attacks. To demonstrate the security of the proposed protocol, we perform various formal and informal analyses using the AVISPA tool, the ROR model, and BAN logic. Additionally, we compare its security properties with those of related protocols to show that the proposed protocol offers superior security. We also demonstrate the efficiency of our proposed protocol compared with related protocols in terms of computation and communication costs. Our protocol has low or comparable computation and communication costs compared to related protocols. Thus, our protocol is well suited to providing services in the WMSN environment. Full article
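As a rough illustration of the mutual-authentication goal (not the paper's actual three-factor protocol), a shared secret lets each party answer a fresh challenge without the secret ever crossing the open channel:

```python
import hashlib
import hmac
import os

def respond(secret, nonce):
    """Keyed response to a fresh challenge; the secret never leaves the device."""
    return hmac.new(secret, nonce, hashlib.sha256).digest()

def mutual_auth(node_secret, gateway_secret):
    """Both checks succeed only if node and gateway hold the same secret."""
    n1 = os.urandom(16)  # gateway challenges the sensor node
    if not hmac.compare_digest(respond(node_secret, n1),
                               respond(gateway_secret, n1)):
        return False
    n2 = os.urandom(16)  # node challenges the gateway
    return hmac.compare_digest(respond(gateway_secret, n2),
                               respond(node_secret, n2))
```

Fresh nonces defeat replay, and `hmac.compare_digest` avoids timing leaks; a real three-factor protocol additionally binds the password, smart card, and biometric into the derived secret.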
(This article belongs to the Special Issue Feature Papers in Computer Science & Engineering)

14 pages, 485 KiB  
Article
Improving the Performance of Open-Set Recognition with Generated Fake Data
Electronics 2023, 12(6), 1311; https://doi.org/10.3390/electronics12061311 - 09 Mar 2023
Cited by 1 | Viewed by 947
Abstract
Open-set recognition models, in addition to generalizing to unseen instances of known categories, have to identify samples of unknown classes during the training phase. The main reason the latter is much more complicated is that there is very little or no information about the properties of these unknown classes. There are methodologies available to handle the unknowns. One possible method is to construct models for them by using generated inputs labeled as unknown. Generative adversarial networks are frequently deployed to generate synthetic samples representing unknown classes to create better models for known classes. In this paper, we introduce a novel approach to improve the accuracy of recognition methods while reducing the time complexity. Instead of generating synthetic input data to train neural networks, feature vectors are generated using the output of a hidden layer. This approach results in a less complex structure for the neural network representation of the classes. A distance-based classifier implemented by a convolutional neural network is used in our implementation. Our solution’s open-set detection performance reaches an AUC value of 0.839 on the CIFAR-10 dataset, while the closed-set accuracy is 91.4%, the highest among the open-set recognition methods. The generator and discriminator networks are much smaller when generating synthetic inner features. There is no need to run these samples through the first part of the classifier with the convolutional layers. Hence, this solution not only gives better performance than generating samples in the input space but also makes it less expensive in terms of computational complexity. Full article
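The feature-space idea can be sketched in miniature: draw synthetic "unknown" vectors directly in the hidden-feature space rather than the input space, and reject any input whose distance to every known class centroid exceeds a threshold. The centroids, spread, and threshold below are illustrative, not the paper's learned values:

```python
import math
import random

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def generate_fake_features(dim, spread=5.0, n=100, seed=0):
    """Hypothetical 'unknown' samples: broad Gaussian draws in feature
    space, mostly landing far from the tight known-class clusters."""
    rng = random.Random(seed)
    return [[rng.gauss(0.0, spread) for _ in range(dim)] for _ in range(n)]

def classify(feature, centroids, threshold=1.5):
    """Distance-based classifier: nearest centroid index, or -1 (unknown)."""
    dists = [euclidean(feature, c) for c in centroids]
    best = min(range(len(dists)), key=dists.__getitem__)
    return best if dists[best] <= threshold else -1
```

Because these vectors live after the convolutional layers, the generator and discriminator never have to model images, which is where the computational savings come from.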
(This article belongs to the Special Issue Feature Papers in Computer Science & Engineering)

17 pages, 2372 KiB  
Article
RESTful API Analysis, Recommendation, and Client Code Retrieval
Electronics 2023, 12(5), 1252; https://doi.org/10.3390/electronics12051252 - 05 Mar 2023
Viewed by 1990
Abstract
Numerous companies create innovative software systems using Web APIs (Application Programming Interfaces). API search engines and API directory services, such as ProgrammableWeb, Rapid API Hub, APIs.guru, and API Harmony, have been developed to facilitate the utilization of various APIs. Unfortunately, most API systems provide only superficial support, with no assistance in obtaining relevant APIs or examples of code usage. To better realize the “FAIR” (Findability, Accessibility, Interoperability, and Reusability) features for the usage of Web APIs, in this study, we developed an API inspection system (referred to as API Prober) to provide a new API directory service with multiple supplemental functionalities. To facilitate the findability and accessibility of APIs, API Prober transforms OAS (OpenAPI Specifications) into a graph structure and automatically annotates the semantic concepts using LDA (Latent Dirichlet Allocation) and WordNet. To enhance interoperability, API Prober also classifies APIs by clustering OAS documents and recommends alternative services to be substituted or merged with the target service. Finally, to support reusability, API Prober makes it possible to retrieve examples of API utilization code in Java by parsing source code in GitHub. The experimental results demonstrate the effectiveness of the API Prober in recommending relevant services and providing usage examples based on real-world client code. This research contributes to providing viable methods to appropriately analyze and cluster Web APIs, and recommend APIs and client code examples. Full article
(This article belongs to the Special Issue Feature Papers in Computer Science & Engineering)

34 pages, 873 KiB  
Article
Applying Social Network Analysis to Model and Handle a Cross-Blockchain Ecosystem
Electronics 2023, 12(5), 1086; https://doi.org/10.3390/electronics12051086 - 22 Feb 2023
Cited by 4 | Viewed by 1332
Abstract
In recent years, the huge growth in the number and variety of blockchains has prompted researchers to investigate the cross-blockchain scenario. In this setting, multiple blockchains coexist, and wallets can exchange data and money from one blockchain to another. The effective and efficient management of a cross-blockchain ecosystem is an open problem. This paper aims to address it by exploiting the potential of Social Network Analysis. This general objective is broken down into a set of activities. First, a social network-based model is proposed to represent such a scenario. Then, a multi-dimensional and multi-view framework is presented, which uses such a model to handle a cross-blockchain scenario. Such a framework allows all the results found in past research on Social Network Analysis to be applied to the cross-blockchain ecosystem. Afterwards, this framework is used to extract insights and knowledge patterns concerning the behavior of several categories of wallets in a cross-blockchain scenario. To verify the effectiveness of the proposed framework, it is applied to a real dataset derived from Multichain in order to identify various user categories and their “modus operandi”. Finally, a new centrality measure is proposed, which identifies the most significant wallets in the ecosystem. This measure considers several viewpoints, each of which addresses a specific aspect that may make a wallet more or less central in the cross-blockchain scenario. Full article
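One way to picture a multi-view centrality of this kind is sketched below: a wallet's score averages its normalized degree over several "views" (for instance, one edge set per blockchain). The edge lists are illustrative, not Multichain data, and the paper's actual measure combines richer viewpoints than degree:

```python
def degree_centrality(edges, nodes):
    """Normalized degree centrality over one view's undirected edge list."""
    deg = {n: 0 for n in nodes}
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    norm = max(len(nodes) - 1, 1)
    return {n: d / norm for n, d in deg.items()}

def multi_view_centrality(views, nodes):
    """Average a wallet's centrality across all views (blockchains)."""
    per_view = [degree_centrality(e, nodes) for e in views]
    return {n: sum(c[n] for c in per_view) / len(per_view) for n in nodes}
```

A wallet active on every chain thus outranks one that is central on a single chain only.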
(This article belongs to the Special Issue Feature Papers in Computer Science & Engineering)

16 pages, 1042 KiB  
Article
An Empirical Study of Segmented Linear Regression Search in LevelDB
Electronics 2023, 12(4), 1018; https://doi.org/10.3390/electronics12041018 - 17 Feb 2023
Cited by 3 | Viewed by 1500
Abstract
This paper proposes a novel search mechanism, called SLR (Segmented Linear Regression) search, based on the concept of the learned index. It is motivated by our observation that much of the big data collected and used by previous studies has a linearity property, meaning that keys and their stored locations show a strong linear correlation. This observation leads us to design SLR search, where we apply segmentation to the well-known machine learning algorithm, linear regression, for identifying a location from a given key. We devise two segmentation techniques, equal-size and error-aware, with consideration of both prediction accuracy and segmentation overhead. We implement our proposal in LevelDB, Google's key-value store, and verify that it can improve search performance by up to 12.7%. In addition, we find that the equal-size technique provides efficiency in training, while the error-aware one is tolerant of noisy data. Full article
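The equal-size variant can be sketched over a sorted key array: fit position = a*key + b per segment by least squares, predict a location, then correct with a short local scan. The data and segment count are synthetic assumptions for illustration:

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    var = sum((x - mx) ** 2 for x in xs)
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / var if var else 0.0
    return a, my - a * mx

def build_slr(keys, n_segments=4):
    """Equal-size segmentation: one linear model per fixed-length chunk."""
    seg = max(len(keys) // n_segments, 1)
    models = []
    for start in range(0, len(keys), seg):
        xs = keys[start:start + seg]
        models.append((xs[0], fit_line(xs, list(range(start, start + len(xs))))))
    return models

def slr_search(keys, models, key):
    """Predict a position with the covering segment's model, then fix up."""
    model = models[0][1]
    for first, m in models:        # last segment whose first key <= key
        if first <= key:
            model = m
    a, b = model
    pos = min(max(int(round(a * key + b)), 0), len(keys) - 1)
    while pos > 0 and keys[pos] > key:         # local correction downward
        pos -= 1
    while pos < len(keys) - 1 and keys[pos] < key:  # and upward
        pos += 1
    return pos if keys[pos] == key else -1
```

The error-aware variant would instead cut a new segment whenever the fit's prediction error exceeds a bound, trading training time for robustness to noisy key distributions.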
(This article belongs to the Special Issue Feature Papers in Computer Science & Engineering)

14 pages, 2091 KiB  
Article
2D Camera-Based Air-Writing Recognition Using Hand Pose Estimation and Hybrid Deep Learning Model
Electronics 2023, 12(4), 995; https://doi.org/10.3390/electronics12040995 - 16 Feb 2023
Cited by 2 | Viewed by 2540
Abstract
Air-writing is a modern human–computer interaction technology that allows participants to write words or letters with finger or hand movements in free space in a simple and intuitive manner. Air-writing recognition is a particular case of gesture recognition in which gestures can be matched to write characters and digits in the air. Air-written characters show extensive variations depending on the various writing styles of participants and their speed of articulation, which makes effective character recognition quite a difficult task. In order to address these difficulties, this work proposes an air-writing system using a web camera. The proposed system consists of two parts: alphabet recognition and digit recognition. In order to assess our proposed system, two character datasets were used: an alphabetic dataset and a numeric dataset. We collected samples from 17 participants and asked each participant to write the alphabetic characters (A to Z) and numeric digits (0 to 9) about 5–10 times. At the same time, we recorded the positions of the fingertips using MediaPipe. As a result, we collected 3166 samples for the alphabetic dataset and 1212 samples for the digit dataset. First, we preprocessed the dataset and then created two datasets: image data and padded sequential data. The image data were fed into a convolutional neural network (CNN) model, whereas the sequential data were fed into a bidirectional long short-term memory (BiLSTM) model. After that, we combined these two models and trained again with 5-fold cross-validation in order to increase the character recognition accuracy. In this work, this combined model is referred to as a hybrid deep learning model. Finally, the experimental results showed that our proposed system achieved an alphabet recognition accuracy of 99.3% and a digit recognition accuracy of 99.5%. We also validated our proposed system using the publicly available 6DMG dataset, on which it provided better recognition accuracy than existing systems. Full article
(This article belongs to the Special Issue Feature Papers in Computer Science & Engineering)

19 pages, 1995 KiB  
Article
Experimental Analysis of Security Attacks for Docker Container Communications
Electronics 2023, 12(4), 940; https://doi.org/10.3390/electronics12040940 - 13 Feb 2023
Cited by 3 | Viewed by 2876
Abstract
Docker has become widely used as an open-source platform for packaging and running applications as containers. It is especially popular among companies and IT developers that provide cloud services, thanks to advantages such as application portability and its lightweight nature. Docker provides communication between multiple containers through internal network configuration, which makes it easier to configure various services by logically connecting containers to each other. However, cyberattacks exploiting the vulnerabilities of the Docker container network, e.g., distributed denial of service (DDoS) and cryptocurrency mining attacks, have recently occurred. In this paper, we experiment with cyberattacks such as ARP spoofing, DDoS, and elevation-of-privilege attacks to show how attackers can execute various attacks, and we analyze the results in terms of network traffic, CPU consumption, and malicious reverse shell execution. In addition, by examining the attacks from the network perspective of the Docker container environment, we lay the groundwork for detecting and preventing lateral movement attacks that may occur between Docker containers. Full article
(This article belongs to the Special Issue Feature Papers in Computer Science & Engineering)

11 pages, 808 KiB  
Article
Comparison of Deep Learning Models for Automatic Detection of Sarcasm Context on the MUStARD Dataset
Electronics 2023, 12(3), 666; https://doi.org/10.3390/electronics12030666 - 29 Jan 2023
Cited by 5 | Viewed by 1517
Abstract
Sentiment analysis is a major area of natural language processing (NLP) research, and its sub-area of sarcasm detection has received growing interest in the past decade. Many approaches have been proposed, from basic machine learning to multi-modal deep learning solutions, and progress has been made. Context has proven to be instrumental for sarcasm detection, and many techniques that use context to identify sarcasm have emerged. However, no NLP research has focused on sarcasm-context detection as its main topic. Therefore, this paper proposes an approach for the automatic detection of sarcasm context, aiming to develop models that can correctly identify the contexts in which sarcasm may occur or is appropriate. Using an established dataset, MUStARD, multiple models are trained and benchmarked to find the best performer for sarcasm-context detection. This performer is shown to be an attention-based long short-term memory architecture that achieves an F1 score of 60.1. Furthermore, we tested the performance of this model on the SARC dataset and compared it with other results reported in the literature to better assess the effectiveness of this approach. Future directions of study are opened, with the prospect of developing a conversational agent that could identify and even respond to sarcasm. Full article
(This article belongs to the Special Issue Feature Papers in Computer Science & Engineering)

16 pages, 406 KiB  
Article
Digital Service Platform and Innovation in Healthcare: Measuring Users’ Satisfaction and Implications
Electronics 2023, 12(3), 662; https://doi.org/10.3390/electronics12030662 - 28 Jan 2023
Cited by 1 | Viewed by 1879
Abstract
When it comes to scheduling health consultations, e-appointment systems are helpful for patients. Non-attendance is a common obstacle that many medical practitioners must endure in the management of appointments in healthcare facilities and outpatient health settings. Prior surveys have found that many users are open to using such mechanisms and that patients would be likely to schedule an online appointment with their doctor if such a system were made accessible. Few studies have sought to determine how well e-appointment systems work, how well they are received by their users, and whether or not they increase the number of appointments booked. The purpose of this research was to collect information that would help the executives of a state hospital in Thessaloniki, Greece, improve their electronic appointment system by measuring the level of satisfaction their patients have with it. The results show that the level of service provided by the electronic appointment system is not satisfactory. The quality of the website is another significant factor that does not contribute to the level of satisfaction experienced by patients. Full article
(This article belongs to the Special Issue Feature Papers in Computer Science & Engineering)

17 pages, 721 KiB  
Article
Dual-Channel Edge-Featured Graph Attention Networks for Aspect-Based Sentiment Analysis
Electronics 2023, 12(3), 624; https://doi.org/10.3390/electronics12030624 - 26 Jan 2023
Cited by 3 | Viewed by 1389
Abstract
The goal of aspect-based sentiment analysis (ABSA) is to identify the sentiment polarity of specific aspects in a context. Recently, graph neural networks have employed dependency-tree syntactic information to assess the link between aspects and contextual words; nevertheless, most of this research has neglected phrases that are insensitive to syntactic analysis and the interactions between various aspects in a sentence. In this paper, we propose a dual-channel edge-featured graph attention network model (AS-EGAT), which builds an aspect syntactic graph by enhancing the contextual syntactic dependency representation of key aspect words and the mutual affective relationship between various aspects in the context, and builds a semantic graph through the self-attention mechanism. We use the edge features as a significant factor in determining the weight coefficients of the attention mechanism to efficiently mine the edge features of the graph attention network (GAT) model. As a result, the model can connect important sentiment features of related aspects when dealing with aspects that lack obvious sentiment expressions, pay close attention to important words when dealing with multiple-word aspects, and extract sentiment features from sentences that are not sensitive to syntactic dependency trees by looking at semantic features. Experimental results show that our proposed AS-EGAT model is superior to the current state-of-the-art baselines. Compared with the baseline models on the LAP14, REST15, REST16, MAMS, T-shirt, and Television datasets, the accuracy of our AS-EGAT model increased by 0.76%, 0.29%, 0.05%, 0.15%, 0.22%, and 0.38%, respectively, and the macro-F1 score increased by 1.16%, 1.16%, 1.23%, 0.37%, 0.53%, and 1.93%, respectively. Full article
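The key twist, letting an edge feature enter the attention score before the softmax, can be sketched with scalar features; real GAT-style layers use learned weight vectors rather than the fixed coefficients assumed here:

```python
import math

def leaky_relu(x, slope=0.2):
    return x if x > 0 else slope * x

def attention_weights(h_i, neighbors, edge_feats, a=(1.0, 1.0, 1.0)):
    """Edge-featured attention for node i: the unnormalized score of
    neighbor j combines h_i, h_j, and the edge feature e_ij, then a
    softmax turns the scores into weights."""
    scores = [leaky_relu(a[0] * h_i + a[1] * h_j + a[2] * e)
              for h_j, e in zip(neighbors, edge_feats)]
    exps = [math.exp(s) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]
```

With equal node features, the neighbor connected by the stronger edge feature receives the larger attention weight, which is exactly how edge information steers the aggregation.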
(This article belongs to the Special Issue Feature Papers in Computer Science & Engineering)

16 pages, 683 KiB  
Article
DNN-Based Forensic Watermark Tracking System for Realistic Content Copyright Protection
Electronics 2023, 12(3), 553; https://doi.org/10.3390/electronics12030553 - 20 Jan 2023
Cited by 1 | Viewed by 1791
Abstract
The metaverse-related content market is active, and the demand for immersive content is increasing. However, there is no definition for granting copyrights to content produced using artificial intelligence, and discussions are still ongoing. We expect that the need for copyright protection for immersive content used in the metaverse environment will emerge and that related copyright protection techniques will be required. In this paper, we present the idea of 3D-to-2D watermarking so that content creators can protect the copyright of immersive content available in the metaverse environment. We propose an immersive-content copyright protection scheme using a deep neural network (DNN), i.e., a neural network composed of multiple hidden layers, and a forensic watermark. Full article
(This article belongs to the Special Issue Feature Papers in Computer Science & Engineering)

18 pages, 3765 KiB  
Article
Software Development for Processing and Analysis of Data Generated by Human Eye Movements
Electronics 2023, 12(3), 485; https://doi.org/10.3390/electronics12030485 - 17 Jan 2023
Viewed by 1322
Abstract
This research focuses on a software application for processing and analyzing data generated by a saccade sensor tracking human eye movements. The main functions of the developed application are presented as well. According to the methodology, three experiments were prepared. The first was related to the visualization of the stimuli on a stimulation computer display, which was integrated into the developed application as a separate module. The second experiment was related to an interactive visualization of the projection of the eye movements of the participants in the experiment onto the stimulation computer display. The third experiment was related to an analysis of aggregated data on the decision time and the number of correct responses given by the participants to visual tasks. The tests showed that the application can be used as a stimulation center to visualize the stimuli and to recreate the experimental sessions. The summary of the results led to the conclusion that the number of correct responses to the visual tasks depended both on the type of motion of the stimuli and on the size of the displacement from the center of the aperture. Full article
(This article belongs to the Special Issue Feature Papers in Computer Science & Engineering)

18 pages, 3839 KiB  
Article
Handwritten Numeral Recognition Integrating Start–End Points Measure with Convolutional Neural Network
Electronics 2023, 12(2), 472; https://doi.org/10.3390/electronics12020472 - 16 Jan 2023
Cited by 1 | Viewed by 1739
Abstract
Convolutional neural network (CNN)-based methods have succeeded in handwritten numeral recognition (HNR) applications. However, CNNs tend to misclassify similarly shaped numerals (i.e., numerals whose silhouettes look the same). This paper presents an enhanced HNR system that improves the classification accuracy of similarly shaped handwritten numerals by incorporating terminal points into the CNN's recognition, which can be utilized in various emerging applications related to language translation. In handwritten numerals, the terminal points (i.e., the start and end positions) serve as additional properties to discriminate between similarly shaped numerals. The Start–End Writing Measure (SEWM) and its integration with a CNN are the main contributions of this research. Traditionally, the classification outcome of a CNN-based system is the numeral category with the highest probability. In the proposed system, along with this classification, its probability value (i.e., the CNN's confidence level) is also used as a regulating element. In parallel with the CNN's classification, SEWM measures the start and end points of the numeral image and suggests the numeral category whose reference start and end points lie closest to the measured ones. Finally, the output label, or the system's classification of the given numeral image, is determined by comparing the confidence level with a predefined threshold value. Compared with other existing methods, SEWM-CNN is a suitable HNR method for Bengali and Devanagari numerals. Full article
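The decision rule described in the abstract can be sketched as follows. This is a hypothetical illustration only: the reference points, threshold value, and helper names are invented here, and the paper's actual SEWM measure may differ.

```python
# Hypothetical sketch of the SEWM-CNN decision rule: trust the CNN when its
# confidence is high; otherwise fall back to the start-end-point suggestion.
# Reference points and the threshold below are invented for illustration.

def euclidean(p, q):
    return ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5

def sewm_suggest(start, end, references):
    """Pick the numeral class whose reference start/end points are closest."""
    best_label, best_cost = None, float("inf")
    for label, (ref_start, ref_end) in references.items():
        cost = euclidean(start, ref_start) + euclidean(end, ref_end)
        if cost < best_cost:
            best_label, best_cost = label, cost
    return best_label

def classify(cnn_label, cnn_confidence, start, end, references, threshold=0.9):
    if cnn_confidence >= threshold:
        return cnn_label          # CNN is confident enough
    return sewm_suggest(start, end, references)

refs = {3: ((0.2, 0.9), (0.2, 0.1)), 8: ((0.5, 0.9), (0.5, 0.9))}
print(classify(3, 0.95, (0.2, 0.9), (0.2, 0.1), refs))  # high confidence -> 3
print(classify(3, 0.40, (0.5, 0.9), (0.5, 0.9), refs))  # low confidence -> SEWM suggests 8
```

The regulating element is thus the confidence threshold: only low-confidence CNN outputs are overridden by the geometric measure.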
(This article belongs to the Special Issue Feature Papers in Computer Science & Engineering)

12 pages, 364 KiB  
Article
Information Systems Strategy and Security Policy: A Conceptual Framework
Electronics 2023, 12(2), 382; https://doi.org/10.3390/electronics12020382 - 11 Jan 2023
Cited by 2 | Viewed by 2676
Abstract
As technology evolves, businesses face new threats and opportunities in the areas of information and information assets, including information creation, refinement, storage, and dissemination. Governments and other organizations around the world have begun prioritizing the protection of cyberspace as a pressing international issue, prompting a renewed emphasis on the development and implementation of information security strategies. While every nation's information security strategy is crucial, little work has been conducted to define a method for gauging national cybersecurity attitudes that takes into account factors and indicators specific to that nation. In order to develop a framework that incorporates issues raised by current research in this area, this paper examines the fundamentals of information security strategy and the factors that affect its integration. This paper contributes by providing a model based on the ITU cybersecurity decisions, with the goal of developing a roadmap for the successful development and implementation of the National Cybersecurity Strategy in Greece, as well as identifying the factors at the national level that may be aligned with a country's cybersecurity level. Full article
(This article belongs to the Special Issue Feature Papers in Computer Science & Engineering)

13 pages, 2017 KiB  
Article
Symmetrical Hardware-Software Design for Improving Physical Activity with a Gamified Music Step Sensor Box
Electronics 2023, 12(2), 368; https://doi.org/10.3390/electronics12020368 - 11 Jan 2023
Cited by 1 | Viewed by 1921
Abstract
Physical inactivity, the fourth leading cause of death worldwide, can harm the economy, national growth, community welfare, health, and quality of life. Physical activity (PA), on the other hand, has numerous advantages, including fewer cardiovascular diseases, cancers, and cases of diabetes, fewer psychological disorders, and improved cognitive abilities. Despite these benefits, people are less likely to participate; the main factor is a lack of entertainment in exercise, which demotivates people from engaging in healthy activities. In this work, we propose a hardware-software symmetry that can entertain people while they perform PA. We developed a step-box with sensors and a gamified music application synchronized with the footsteps. The purpose of this study is to show that incorporating appropriate gamification allows participants to engage actively in otherwise tedious, inexpensive exercises. Participants (N = 90) took part in 20-min daily exercise sessions for three days. A 5-point Likert scale was used to assess efficiency, effectiveness, and satisfaction following the exercise sessions. The results show that the gamified sensor step-box increased efficiency, effectiveness, and participant satisfaction. The findings suggest that gamification fundamentals in simple exercises increase excitement and may help people maintain PA. Full article
(This article belongs to the Special Issue Feature Papers in Computer Science & Engineering)

19 pages, 2120 KiB  
Article
Multi-Vehicle Trajectory Tracking towards Digital Twin Intersections for Internet of Vehicles
Electronics 2023, 12(2), 275; https://doi.org/10.3390/electronics12020275 - 05 Jan 2023
Cited by 5 | Viewed by 1283
Abstract
Digital Twin (DT) provides a novel idea for Intelligent Transportation Systems (ITS), while Internet of Vehicles (IoV) provides numerous positioning data of vehicles. However, complex interactions between vehicles as well as offset and loss of measurements can lead to tracking errors of DT trajectories. In this paper, we propose a multi-vehicle trajectory tracking framework towards DT intersections (MVT2DTI). Firstly, the positioning data is unified to the same coordinate system and associated with the tracked trajectories via matching. Secondly, a spatial–temporal tracker (STT) utilizes long short-term memory network (LSTM) and graph attention network (GAT) to extract spatial–temporal features for state prediction. Then, the distance matrix is computed as a proposed tracking loss that feeds tracking errors back to the tracker. Through the iteration of association and prediction, the unlabeled coordinates are connected into the DT trajectories. Finally, four datasets are generated to validate the effectiveness and efficiency of the framework. Full article
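The association step, in which unlabeled positioning data are matched to tracked trajectories via a distance matrix, can be sketched as below. This is a simplified, hypothetical stand-in: the actual MVT2DTI framework first predicts states with the LSTM/GAT tracker, whereas this sketch greedily matches raw positions by smallest pairwise distance.

```python
# Minimal, hypothetical sketch of distance-matrix association: match incoming
# detections to tracked trajectories greedily, nearest pair first. The real
# framework matches against STT-predicted states rather than last positions.
import math

def greedy_associate(tracks, detections):
    """Return {track_index: detection_index} pairs, nearest pairs first."""
    pairs = sorted(
        (math.dist(t, d), ti, di)
        for ti, t in enumerate(tracks)
        for di, d in enumerate(detections)
    )
    used_t, used_d, match = set(), set(), {}
    for _, ti, di in pairs:
        if ti not in used_t and di not in used_d:
            match[ti] = di
            used_t.add(ti)
            used_d.add(di)
    return match

tracks = [(0.0, 0.0), (10.0, 10.0)]        # last known positions (invented)
detections = [(9.5, 10.2), (0.3, -0.1)]    # new unlabeled coordinates
print(greedy_associate(tracks, detections))  # {0: 1, 1: 0}
```

Iterating this association together with state prediction is what connects the unlabeled coordinates into continuous digital twin trajectories.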
(This article belongs to the Special Issue Feature Papers in Computer Science & Engineering)

24 pages, 481 KiB  
Article
GEAR: A General Inference Engine for Automated MultiStrategy Reasoning
Electronics 2023, 12(2), 256; https://doi.org/10.3390/electronics12020256 - 04 Jan 2023
Cited by 5 | Viewed by 1229
Abstract
The pervasive use of AI today has created an urgent need for human-compliant AI approaches and solutions that can explain their behavior and decisions in human-understandable terms, especially in critical domains, so as to enforce trustworthiness and support accountability. The symbolic/logic approach to AI supports this need because it aims at reproducing human reasoning mechanisms. While much research has been carried out on single inference strategies, an overall approach to combining them is still missing. This paper argues for a new overall approach, named MultiStrategy Reasoning, that merges these single strategies. Based on an analysis of research on automated inference in AI, it selects a suitable setting for this approach, reviews the most promising proposals for single inference strategies, and proposes a possible combination of deduction, abduction, abstraction, induction, argumentation, uncertainty and analogy. It also introduces the GEAR (General Engine for Automated Reasoning) inference engine, which has been developed to implement this vision. Full article
(This article belongs to the Special Issue Feature Papers in Computer Science & Engineering)

16 pages, 1446 KiB  
Article
Hausdorff Distance and Similarity Measures for Single-Valued Neutrosophic Sets with Application in Multi-Criteria Decision Making
Electronics 2023, 12(1), 201; https://doi.org/10.3390/electronics12010201 - 31 Dec 2022
Cited by 19 | Viewed by 1582
Abstract
The Hausdorff distance is one of the important distance measures for studying the degree of dissimilarity between two sets, and it has been used in various fields under fuzzy environments. Among these, the framework of single-valued neutrosophic sets (SVNSs) has the greatest potential to express uncertain, inconsistent and indeterminate information in a comprehensive way, so a Hausdorff distance for SVNSs is important. Thus, we propose two novel schemes to calculate the Hausdorff distance and its corresponding similarity measures (SMs) for SVNSs. In doing so, we first develop two forms of the Hausdorff distance between SVNSs based on the definition of the Hausdorff metric between two sets. We then use these new distance measures to construct several SMs for SVNSs. Some mathematical theorems regarding the proposed Hausdorff distances for SVNSs are also proven to strengthen their theoretical properties. To show the exact calculation behavior and distance measurement mechanism of our proposed methods in accordance with the properties of the Hausdorff metric, we provide an intuitive numerical example that demonstrates the novelty and practicality of the proposed measures. Furthermore, we develop a multi-criteria decision making (MCDM) method under a single-valued neutrosophic environment using the proposed SMs based on our Hausdorff distance measures, called the single-valued neutrosophic MCDM (SVN-MCDM) method. In this connection, we employ our proposed SMs to compute the degree of similarity of each option with the ideal choice, in order to identify the best alternative and to perform an overall ranking of the alternatives under study. We then apply our SVN-MCDM scheme to solve two real-world MCDM problems under a single-valued neutrosophic environment to show its effectiveness and application. Full article
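To make the idea concrete, here is one plausible Hausdorff-style distance between two SVNSs, where each element is a (truth, indeterminacy, falsity) triple in [0, 1]. This is illustration only: the paper proposes two specific forms, and this sketch simply takes the maximum of componentwise differences, with similarity defined as one minus the distance.

```python
# Illustrative only: a max-of-componentwise-differences Hausdorff-style
# distance between two single-valued neutrosophic sets, each element a
# (truth, indeterminacy, falsity) triple. Not the paper's exact formulas.

def svns_hausdorff(A, B):
    return max(
        max(abs(a - b) for a, b in zip(x, y))   # worst component per element
        for x, y in zip(A, B)                   # worst element overall
    )

def svns_similarity(A, B):
    return 1.0 - svns_hausdorff(A, B)

A = [(0.7, 0.2, 0.1), (0.5, 0.4, 0.3)]
B = [(0.6, 0.1, 0.3), (0.5, 0.2, 0.4)]
print(svns_hausdorff(A, B))   # ~0.2
print(svns_similarity(A, B))  # ~0.8
```

In an MCDM setting, such a similarity score would be computed between each alternative and the ideal choice, and alternatives ranked by the score.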
(This article belongs to the Special Issue Feature Papers in Computer Science & Engineering)

18 pages, 2652 KiB  
Article
ROS System Facial Emotion Detection Using Machine Learning for a Low-Cost Robot Based on Raspberry Pi
Electronics 2023, 12(1), 90; https://doi.org/10.3390/electronics12010090 - 26 Dec 2022
Cited by 4 | Viewed by 2448
Abstract
Facial emotion recognition (FER) is a field of research with multiple state-of-the-art solutions, applied in fields such as security, marketing and robotics. In the literature, several articles present algorithms for detecting emotions from different perspectives. More specifically, among the emotion detection systems in the literature whose computational cores are low-cost, the results presented are usually obtained in simulation or with quite limited real-world tests. This article presents a facial emotion detection system—detecting emotions such as anger, happiness, sadness or surprise—that was implemented under the Robot Operating System (ROS), Noetic version, and is based on the latest machine learning (ML) techniques proposed in the state-of-the-art. To make these techniques efficient enough to be executed in real time on a low-cost board, extensive experiments were conducted in a real-world environment using a low-cost general-purpose board, the Raspberry Pi 4 Model B. The final FER system proposed in this article is capable of running in real time, operating at more than 13 fps, without using any external accelerator hardware, which other works (reviewed in this article) require to achieve the same purpose. Full article
(This article belongs to the Special Issue Feature Papers in Computer Science & Engineering)

16 pages, 2911 KiB  
Article
An Approach for Matrix Multiplication of 32-Bit Fixed Point Numbers by Means of 16-Bit SIMD Instructions on DSP
Electronics 2023, 12(1), 78; https://doi.org/10.3390/electronics12010078 - 25 Dec 2022
Cited by 3 | Viewed by 1992
Abstract
Matrix multiplication is an important operation for many engineering applications. Sometimes new features that include matrix multiplication must be added to existing, and even out-of-date, embedded platforms. In this paper, an unusual problem is considered: how to implement matrix multiplication of 32-bit signed integers and fixed-point numbers on a DSP that has SIMD instructions for 16-bit integers only. For the examined tasks, the matrix size may vary from several tens to two hundred. The proposed mathematical approach for dense rectangular matrix multiplication of 32-bit numbers comprises the decomposition of 32-bit matrices into matrices of 16-bit numbers, four matrix multiplications of 16-bit unsigned integers via the outer product, and correction of the outcome for signed integers and fixed-point numbers. Several tricks for performance optimization are analyzed. In addition, ways of achieving block-wise and parallel implementations are described. An implementation of the proposed method by means of 16-bit vector instructions is faster than matrix multiplication using 32-bit scalar instructions and demonstrates performance close to the theoretically achievable limit. The described technique can be generalized to matrix multiplication of n-bit integers and fixed-point numbers by operating on matrices of n/2-bit integers. In conclusion, recommendations are presented for practitioners who work on implementing matrix multiplication for various DSPs. Full article
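The decomposition behind the method can be illustrated for a single scalar product. The sketch below splits each 32-bit value into unsigned 16-bit halves, performs the four 16x16 partial products (what a 16-bit SIMD unit provides, here applied to matrices via outer products), and recombines them with shifts. Sign handling below simply works on magnitudes; the paper derives a dedicated correction term instead, so treat this as a conceptual stand-in.

```python
# Scalar sketch of the 32-bit-via-16-bit decomposition:
#   a = (a_hi << 16) + a_lo,  so
#   a*b = (a_hi*b_hi << 32) + ((a_hi*b_lo + a_lo*b_hi) << 16) + a_lo*b_lo
# Signs are handled naively on magnitudes here, unlike the paper's correction.

def mul32_via_16(a, b):
    sign = -1 if (a < 0) != (b < 0) else 1
    a, b = abs(a), abs(b)
    a_hi, a_lo = a >> 16, a & 0xFFFF
    b_hi, b_lo = b >> 16, b & 0xFFFF
    # four 16-bit partial products, shifted into place
    return sign * ((a_hi * b_hi << 32)
                   + ((a_hi * b_lo + a_lo * b_hi) << 16)
                   + a_lo * b_lo)

for a, b in [(123456789, 987654), (-2**31 + 1, 2**31 - 1), (-7, 5)]:
    assert mul32_via_16(a, b) == a * b
print("all partial-product recombinations match")
```

On the DSP, each of the four partial products becomes a full 16-bit matrix multiplication, which is why the method needs exactly four of them.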
(This article belongs to the Special Issue Feature Papers in Computer Science & Engineering)

16 pages, 2464 KiB  
Article
Explainable AI to Predict Male Fertility Using Extreme Gradient Boosting Algorithm with SMOTE
Electronics 2023, 12(1), 15; https://doi.org/10.3390/electronics12010015 - 21 Dec 2022
Cited by 6 | Viewed by 1899
Abstract
Infertility is a common problem across the world, and male factors account for 40% to 50% of cases. Existing artificial intelligence (AI) systems are often not human-interpretable. Further, clinicians are unaware of how such data-analytical tools make decisions, and as a result, these tools have had limited uptake in healthcare. Using explainable AI tools makes AI systems transparent and traceable, enhancing users' trust and confidence in decision-making. The main contribution of this study is to introduce an explainable model for investigating male fertility prediction. Nine features related to lifestyle and environmental factors are utilized to develop a male fertility prediction model. Five AI tools, namely support vector machine, adaptive boosting, conventional extreme gradient boosting (XGB), random forest, and extra tree algorithms, are deployed with balanced and imbalanced datasets. To produce our model in a trustworthy way, explainable AI techniques are applied: (1) local interpretable model-agnostic explanations (LIME) and (2) Shapley additive explanations (SHAP). Additionally, ELI5 is utilized to inspect feature importance. Finally, XGB outperformed the other tools, obtaining an AUC of 0.98, which is optimal compared to existing AI systems. Full article
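The class balancing mentioned above (SMOTE) rests on a simple idea: synthesize new minority-class samples by interpolating between a minority sample and one of its nearest minority neighbors. The sketch below shows that core idea in miniature; a real pipeline would use imbalanced-learn's SMOTE together with XGBoost, and the sample points here are invented.

```python
# Minimal sketch of SMOTE's core interpolation step, for intuition only.
# Real pipelines would use imbalanced-learn's SMOTE with a proper k-NN search.
import math
import random

def smote_samples(minority, n_new, k=2, seed=0):
    rng = random.Random(seed)
    synthetic = []
    for _ in range(n_new):
        x = rng.choice(minority)
        # k nearest minority neighbors of x (brute force for the sketch)
        neighbors = sorted(
            (p for p in minority if p is not x),
            key=lambda p: math.dist(x, p),
        )[:k]
        nb = rng.choice(neighbors)
        gap = rng.random()  # interpolate a random fraction toward the neighbor
        synthetic.append(tuple(xi + gap * (ni - xi) for xi, ni in zip(x, nb)))
    return synthetic

minority = [(1.0, 1.0), (1.2, 0.9), (0.8, 1.1)]   # toy minority class
new_pts = smote_samples(minority, n_new=4)
print(len(new_pts))  # 4
```

Because each synthetic point is a convex combination of two real minority samples, it stays inside the region the minority class already occupies.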
(This article belongs to the Special Issue Feature Papers in Computer Science & Engineering)

17 pages, 8332 KiB  
Article
Development of Manipulator Digital Twin Experimental Platform Based on RCP
Electronics 2022, 11(24), 4196; https://doi.org/10.3390/electronics11244196 - 15 Dec 2022
Viewed by 1183
Abstract
From the perspective of teaching and research, we developed a manipulator digital twin experiment platform (named the remote experience platform, REP) based on a rapid control prototype (RCP). The platform consists of a controlled target, a real-time controller, rapid prototype configuration software, and supervisory control software. The controlled target is a 6-DOF manipulator, divided into a physical entity and its digital twin. The 3D model and mathematical model of the manipulator were constructed as an experimental entity in a digital space. The whole system provides flexible and intuitive experimental scenes without the restraints of time and place. Based on RCP technology, students can design various complex control strategies using simulation tools such as Matlab/Simulink and then convert the graphical model into executable code to be run on target hardware. The framework and development methods of the proposed system are elaborated in this paper. An example is demonstrated, including the invocation of algorithms, one-click code generation and compilation, real-time verification, online parameter adjustment, and more. The feasibility and practicability of the system are verified through a PID control experiment on the manipulator. Full article
(This article belongs to the Special Issue Feature Papers in Computer Science & Engineering)

13 pages, 807 KiB  
Article
An Artificial Visual System for Three Dimensional Motion Direction Detection
Electronics 2022, 11(24), 4161; https://doi.org/10.3390/electronics11244161 - 13 Dec 2022
Cited by 1 | Viewed by 950
Abstract
In mammals, enormous amounts of visual information are processed by neurons of the visual nervous system. Research on direction selectivity is of great significance, and local direction-selective ganglion neurons have been discovered. However, research remains at the one-dimensional level and concentrated on single cells, and it remains challenging to explain the function and mechanism of overall motion direction detection. In our previous papers, we proposed a motion direction detection mechanism at the two-dimensional level to address these problems. However, those studies did not take into account that the information in the left and right retinas differs, and therefore they cannot be used to detect three-dimensional motion direction; further effort is required to develop a more realistic system in three dimensions. In this paper, we propose a new three-dimensional artificial visual system that extends the motion direction detection mechanism into three dimensions. We assume that a neuron can detect the local motion of a single-voxel object within three-dimensional space, and we take into consideration that the information of the left and right retinas differs. Based on this binocular disparity, a realistic motion direction mechanism for three dimensions is established: the neurons receive signals from the primary visual cortex of each eye and respond to motion in specific directions. A series of local direction-selective ganglion neurons is arrayed on the retina, combined by a logical AND operation. The response of each local direction detection neuron is further integrated by the next neural layer to obtain the global motion direction. We carry out several computer simulations to demonstrate the validity of the mechanism, showing that the proposed mechanism is capable of detecting the motion of complex three-dimensional objects, consistent with most known physiological experimental results. Full article
(This article belongs to the Special Issue Feature Papers in Computer Science & Engineering)

13 pages, 2103 KiB  
Article
Smart Random Walk Distributed Secured Edge Algorithm Using Multi-Regression for Green Network
Electronics 2022, 11(24), 4141; https://doi.org/10.3390/electronics11244141 - 12 Dec 2022
Viewed by 1047
Abstract
Smart communication has significantly advanced with the integration of the Internet of Things (IoT). Many devices and online services are utilized in the network system to cope with data gathering and forwarding. Recently, many traffic-aware solutions have explored autonomous systems to attain intelligent routing and flow of internet traffic with the support of artificial intelligence. However, the inefficient usage of nodes' batteries and long-range communication degrade the connectivity time between the deployed sensors and the end devices. Moreover, trustworthy route identification is another significant research challenge in formulating a smart system. Therefore, this paper presents a smart Random walk Distributed Secured Edge (RDSE) algorithm, using a multi-regression model for IoT networks, which aims to enhance the stability of the chosen IoT network with the support of an optimal system. In addition, by using secured computing, the proposed architecture increases the trustworthiness of smart devices with the least node complexity. The proposed algorithm differs from other works in terms of the following factors. Firstly, it uses a random walk to form the initial routes with certain probabilities, and later, by exploiting a multi-variant function, it attains long-lasting communication with a high degree of network stability. This helps to improve the optimization criteria for the nodes' communication and efficiently utilizes energy in combination with mobile edges. Secondly, the trust factors successfully identify the normal nodes even when the system is compromised; therefore, the proposed algorithm reduces data risks and offers a more reliable and private system. In addition, simulation-based testing reveals the significant performance of the proposed algorithm in comparison to existing work. Full article
(This article belongs to the Special Issue Feature Papers in Computer Science & Engineering)

21 pages, 1977 KiB  
Article
Approach for Designing Real-Time IoT Systems
Electronics 2022, 11(24), 4120; https://doi.org/10.3390/electronics11244120 - 10 Dec 2022
Cited by 3 | Viewed by 1210
Abstract
Along with the rapid development of Internet of Things (IoT) technology over the past few years, opportunities for its implementation in service areas with real-time requirements have begun to be recognized. In this regard, one of the most important criteria is to maintain Quality of Service (QoS) parameters at an appropriately high level. The QoS level should ensure the delivery of data packets in the shortest time possible while preventing critical parameters relevant to real-time transmission from being exceeded. This article proposes a new methodology for designing real-time IoT systems. The premise of the proposed approach is to adapt selected solutions used in other types of systems that operate under real-time requirements. In this regard, an analogy to embedded systems with a distributed architecture has been noted and exploited. The main differences from the concept of embedded systems can primarily be seen in the communication layer. The methodology proposed in this article is based on the authors' model of real-time system functional specification and its mapping to the IoT architecture. In addition, the developed methodology makes extensive use of selected IoT architecture elements described in this article, as well as selected task scheduling methods and communication protocols. The proposed methodology for designing RTIoT systems is based on dedicated transmission serialization methods and dedicated routing protocols. These methods ensure that the time constraints for the assumed bandwidth of IoT links are met by appropriately prioritizing transmissions and determining communication routes. The presented approach can be used to design a broad class of RTIoT systems. Full article
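The prioritization idea behind such transmission scheduling can be sketched with a priority queue: real-time packets are served by priority and deadline rather than arrival order. The packet classes, priority values, and deadlines below are invented for illustration and are not from the article's methodology.

```python
# Hypothetical sketch of priority-based transmission scheduling for RTIoT
# links: packets are served in (priority, deadline) order, not arrival order.
import heapq

def schedule(packets):
    """packets: (priority, deadline_ms, name) tuples; lower values served first."""
    heap = list(packets)          # copy so the caller's list is untouched
    heapq.heapify(heap)
    return [heapq.heappop(heap)[2] for _ in range(len(heap))]

packets = [
    (2, 500, "telemetry"),
    (0, 20, "alarm"),       # hard real-time: smallest priority value
    (1, 100, "actuation"),
]
print(schedule(packets))  # ['alarm', 'actuation', 'telemetry']
```

Under a fixed link bandwidth, serving the queue in this order is what keeps the hard real-time packets within their time constraints.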
(This article belongs to the Special Issue Feature Papers in Computer Science & Engineering)

15 pages, 905 KiB  
Article
Hybrid Encryption Scheme for Medical Imaging Using AutoEncoder and Advanced Encryption Standard
Electronics 2022, 11(23), 3967; https://doi.org/10.3390/electronics11233967 - 30 Nov 2022
Cited by 2 | Viewed by 1332
Abstract
Recently, medical image encryption has gained special attention due to the nature and sensitivity of medical data and the lack of effective image encryption using innovative techniques. Several encryption schemes have been recommended and developed in an attempt to improve medical image encryption. The majority of these studies rely on conventional encryption techniques; however, such improvements have come with increased computational complexity and slower encryption and decryption. Alternatively, combining intelligent models such as deep learning with encryption schemes has exhibited more effective outcomes, especially when used with digital images. This paper aims to reduce and transform the data transferred between interested parties and to prevent conclusions from being drawn from encrypted medical images. To do so, the goal was to move from encrypting an image to encrypting the features of an image, which are extracted as floating-point values. We therefore propose a deep learning-based image encryption scheme using the autoencoder (AE) technique and the advanced encryption standard (AES). Specifically, the proposed scheme encrypts the digest of the medical image produced by the encoder of the autoencoder model on the encryption side; on the decryption side, the corresponding decoder is used after decrypting the carried data. The autoencoder was also used to enhance the quality of corrupted medical images with different types of noise. In addition, we investigated the structural similarity (SSIM) and mean square error (MSE) scores of the proposed model by applying four different types of noise: salt and pepper, speckle, Poisson, and Gaussian. It was observed that for all types of added noise, the decoder reduced the noise in the resulting images. Finally, the performance evaluation demonstrated that our proposed system improves the encryption/decryption overhead by 50–75% compared with other existing models. Full article
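The pipeline shape, encrypting the encoder's compact feature vector rather than the full image, can be sketched as below. To keep the sketch dependency-free, a keyed SHA-256 keystream XOR stands in for AES, and the feature values are invented; a real system would use AES (e.g. AES-GCM via a crypto library) on the output of a trained autoencoder.

```python
# Conceptual sketch: serialize the encoder's float features and encrypt that
# small payload instead of the image. The keystream cipher below is a
# stand-in for AES, used only to keep this sketch self-contained.
import hashlib
import struct

def keystream_xor(key: bytes, data: bytes) -> bytes:
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(b ^ k for b, k in zip(data, out))

def encrypt_features(key, features):
    payload = struct.pack(f"{len(features)}f", *features)  # float32 digest
    return keystream_xor(key, payload)

def decrypt_features(key, blob):
    payload = keystream_xor(key, blob)
    return list(struct.unpack(f"{len(payload) // 4}f", payload))

features = [0.125, -3.5, 7.75]          # stand-in for the encoder's output
blob = encrypt_features(b"secret-key", features)
assert decrypt_features(b"secret-key", blob) == features
print("features survive the encrypt/decrypt round trip")
```

Because only the feature digest crosses the channel, the transferred data shrink, and the decoder on the receiving side reconstructs the image from the decrypted features.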
(This article belongs to the Special Issue Feature Papers in Computer Science & Engineering)

18 pages, 1910 KiB  
Article
Orientation Detection System Based on Edge-Orientation Selective Neurons
Electronics 2022, 11(23), 3946; https://doi.org/10.3390/electronics11233946 - 29 Nov 2022
Viewed by 991
Abstract
In this paper, we propose an orientation detection mechanism based on edge-orientation selective neurons. We assume that there are neurons in V1 that generate responses to an object's edges, and that each neuron responds optimally to a specific orientation within a local receptive field. The global orientation is inferred from the aggregation of local orientation information. An orientation detection system is further developed based on the proposed mechanism. We design four types of neurons for four local orientations and use these neurons to extract local orientation information. The global orientation is obtained from the most activated neurons. The performance of this orientation detection system is evaluated on orientation detection tasks. From the experimental results, we conclude that our proposed global orientation mechanism is feasible and explainable. The mechanism-based orientation detection system shows better recognition accuracy and noise immunity than traditional convolutional neural network-based orientation detection systems and an EfficientNet-based orientation detection system, which are currently the most accurate. In addition, our artificial visual system based on edge-orientation selective cells greatly saves time and learning cost compared to traditional convolutional neural networks and EfficientNet. Full article
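The aggregation scheme described above can be sketched with four toy "neurons": each is a small kernel tuned to one local orientation, fires where its response dominates, and the global orientation is the one with the most firing neurons. The kernels, image, and voting rule below are invented simplifications, not the paper's actual design.

```python
# Illustrative sketch: four local edge-orientation "neurons" (0, 45, 90, 135
# degrees) respond in 2x2 receptive fields; the global orientation is the
# one collecting the most local activations. All values are toy inventions.
KERNELS = {
    0:   [[ 1,  1], [-1, -1]],   # horizontal edge
    90:  [[ 1, -1], [ 1, -1]],   # vertical edge
    45:  [[ 0,  1], [-1,  0]],
    135: [[ 1,  0], [ 0, -1]],
}

def response(kernel, img, i, j):
    return abs(sum(kernel[di][dj] * img[i + di][j + dj]
                   for di in range(2) for dj in range(2)))

def global_orientation(img):
    votes = {angle: 0 for angle in KERNELS}
    for i in range(len(img) - 1):
        for j in range(len(img[0]) - 1):
            responses = {a: response(k, img, i, j) for a, k in KERNELS.items()}
            best = max(responses, key=responses.get)
            if responses[best] > 0:      # local neuron fires
                votes[best] += 1
    return max(votes, key=votes.get)     # most activated orientation wins

# a horizontal bar: top two rows bright, bottom two dark
img = [[1, 1, 1, 1],
       [1, 1, 1, 1],
       [0, 0, 0, 0],
       [0, 0, 0, 0]]
print(global_orientation(img))  # 0
```

Only the edge row triggers the local detectors, and all of them agree on the horizontal orientation, so the vote is unambiguous.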

13 pages, 4739 KiB  
Article
Detection of Fake Replay Attack Signals on Remote Keyless Controlled Vehicles Using Pre-Trained Deep Neural Network
Electronics 2022, 11(20), 3376; https://doi.org/10.3390/electronics11203376 - 19 Oct 2022
Cited by 5 | Viewed by 2373
Abstract
Keyless systems have replaced the old-fashioned method of inserting a physical key into a keyhole to unlock the door, which is inconvenient and easily exploited by threat actors. Keyless systems use radio frequency (RF) technology as an interface to transmit signals from the key fob to the vehicle. However, keyless systems are also susceptible to compromise by a threat actor who intercepts the transmitted signal and performs a replay attack. In this paper, we propose a transfer learning-based model to identify replay attacks launched against remote keyless controlled vehicles. Specifically, the system uses a pre-trained ResNet50 deep neural network to classify the wireless remote signals used to lock or unlock the doors of a remote-controlled vehicle system. The signals are classified into three classes: real signal, fake signal with high gain, and fake signal with low gain. We trained our model for 100 epochs (3800 iterations) on the modern KeFRA 2022 dataset. Using an SGD solver, the model recorded a final validation accuracy of 99.71% and a final validation loss of 0.29%, at a low inference time of 50 ms. The experimental evaluation demonstrated the superiority of the proposed model.

13 pages, 1672 KiB  
Article
Majority Approximators for Low-Latency Data Bus Inversion
Electronics 2022, 11(20), 3352; https://doi.org/10.3390/electronics11203352 - 17 Oct 2022
Viewed by 1208
Abstract
Data bus inversion (DBI) is an encoding technique that saves power in data movement, in which the majority function plays an essential role. For latency optimization, the majority function can be replaced by a majority approximator that allows a small error in majority voting to obtain a faster encoder that still saves power. In this work, we propose two systematic approaches for finding high-performance majority approximators. First, we perform an exhaustive search of all possible Boolean functions to find an optimal approximator based on a certain circuit structure composed of fifteen logic gates. The approximator found by the systematic search can be implemented using compound gates, resulting in a latency-efficient design with only two gate levels. Compared with prior works based on heuristics, the proposed circuit runs at the same speed but achieves greater savings in switching activity. Second, we propose another majority approximator that averages three randomly permuted copies of the approximator found in the first approach. We show that this second approximator achieves even higher savings in switching activity, as its function is closer to a true majority voter. We report various performance metrics of the newly found majority approximators based on syntheses using a 65 nm process.
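For readers unfamiliar with DBI, the encoding principle is simple: if a majority of bus lines would toggle relative to the previous word, the word is sent inverted with an extra DBI flag bit. The sketch below, under our own simplifying assumptions for an 8-bit bus, also shows where an approximator plugs in: any function can replace `exact_majority`, trading a small voting error for a faster circuit.

```python
def exact_majority(bits):
    """True when more than half of the bits are 1."""
    return 2 * sum(bits) > len(bits)

def dbi_encode(prev, word, majority=exact_majority):
    """Invert `word` when a majority of bus lines would toggle vs. `prev`.
    Returns (encoded_word, dbi_flag). A majority approximator can be
    plugged in via the `majority` argument."""
    toggles = [p ^ w for p, w in zip(prev, word)]
    if majority(toggles):
        return [b ^ 1 for b in word], 1
    return list(word), 0

# 5 of 8 lines would toggle, so the word is sent inverted with DBI = 1.
encoded, flag = dbi_encode([0] * 8, [1, 1, 1, 1, 1, 0, 0, 0])
```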

15 pages, 3988 KiB  
Article
Cloud-Based, Expandable—Reconfigurable Remote Laboratory for Electronic Engineering Experiments
Electronics 2022, 11(20), 3292; https://doi.org/10.3390/electronics11203292 - 12 Oct 2022
Cited by 2 | Viewed by 2034
Abstract
This article describes the design and development of a remote laboratory based on NI myRIO devices. The cloud-based, expandable, and reconfigurable remote laboratory gives students access to an online web-based user interface to perform experiments. Multiple myRIO devices are programmed to host several experiments each. A finite state machine is used to select a specific experiment, and a single state can contain several. The laboratory's web virtual instrument interfaces are hosted on the SystemLink cloud and SystemLink server. A user-friendly interface has been designed to help students understand important electronic concepts. Virtual and real experiments were fused to give students a wide range of experiments they can access online. The instructor can check the outputs of an experiment being executed on a device. Connecting myRIO to SystemLink through global variables ensured that this low-cost device was fully utilized, which makes the system suitable for universities in developing countries that cannot afford expensive equipment. Students can perform experiments that resemble physical execution. The system is expandable in that the number of myRIO devices or experiments can be increased to suit changing requirements, and reconfigurable in that the finite state machine-based coding technique permits only one experiment at a time to be selected, configured, and run while the other experiments remain idle.
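The one-experiment-at-a-time behavior of the selector can be modeled as a tiny state machine, where activating one experiment implicitly idles the previous one. Experiment names below are invented for illustration; they are not the laboratory's actual experiment set.

```python
class ExperimentFSM:
    """Minimal model of the FSM selector: at most one active state."""

    def __init__(self, experiments):
        self.experiments = set(experiments)
        self.active = None            # all experiments idle initially

    def select(self, name):
        if name not in self.experiments:
            raise ValueError("unknown experiment: " + name)
        self.active = name            # implicitly idles the previous one

    def run(self):
        return "running " + self.active if self.active else "idle"

# Hypothetical experiment names.
fsm = ExperimentFSM(["rc_filter", "diode_iv", "opamp_gain"])
fsm.select("diode_iv")
```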

24 pages, 1416 KiB  
Article
A Secure Personal Health Record Sharing System with Key Aggregate Dynamic Searchable Encryption
Electronics 2022, 11(19), 3199; https://doi.org/10.3390/electronics11193199 - 06 Oct 2022
Cited by 1 | Viewed by 1492
Abstract
Recently, as interest in individualized health has increased, the Personal Health Record (PHR) has attracted a lot of attention for prognosis prediction and accurate diagnosis. Cloud servers have been used to manage PHR systems, but privacy concerns are evident since cloud servers process the entire PHR, which contains the sensitive information of patients. In addition, because cloud servers manage the PHR system centrally, patients lose direct control over their own PHRs, and cloud servers can be an attractive target for malicious users. Therefore, ensuring the integrity and privacy of the PHR and allocating authorization to users are important issues. In this paper, we propose a secure PHR sharing system using a blockchain, the InterPlanetary File System (IPFS), and smart contracts to ensure PHR integrity and secure verification. To guarantee the patient's authority over the management of his/her own PHR, as well as provide convenient access, we suggest a key aggregate dynamic searchable encryption scheme. We prove the security of the proposed scheme through informal and formal analyses, including an Automated Validation of Internet Security Protocols and Applications (AVISPA) simulation, Burrows-Abadi-Needham (BAN) logic, and security-model-based games. Furthermore, we estimate the computational costs of the proposed scheme using the Multiprecision Integer and Rational Arithmetic Cryptographic Library (MIRACL) and compare the results with those of previous works.

18 pages, 1798 KiB  
Article
Leveraging Machine Learning for Fault-Tolerant Air Pollutants Monitoring for a Smart City Design
Electronics 2022, 11(19), 3122; https://doi.org/10.3390/electronics11193122 - 29 Sep 2022
Cited by 3 | Viewed by 1234
Abstract
Air pollution has become a global issue due to its widespread impact on the environment, economy, civilization, and human health, and a lot of research has been devoted to tackling it. However, most existing methodologies suffer from issues such as high cost, limited deployment and maintenance capabilities, and uni- or bi-variate treatment of air pollutant concentrations. In this paper, a hybrid CNN-LSTM model is presented to forecast multivariate air pollutant concentrations for an Internet of Things (IoT)-enabled smart city design. The CNN-LSTM combination acts as an encoder-decoder, which improves overall accuracy and precision. The performance of the proposed CNN-LSTM is compared with conventional and hybrid machine learning (ML) models on the basis of Mean Absolute Error (MAE), Mean Absolute Percentage Error (MAPE), and Mean Squared Error (MSE). The proposed model outperforms various state-of-the-art ML models, generating an average MAE, MAPE, and MSE of 54.80%, 52.78%, and 60.02%, respectively. Furthermore, the predictions are cross-validated against the actual concentrations of air pollutants, and the proposed model achieves a high degree of prediction accuracy relative to real-time concentrations. Moreover, a cross-grid cooperative scheme is proposed to handle IoT monitoring station malfunctions and make pollutant monitoring more fault resistant and robust. The proposed scheme exploits the correlation between neighbouring monitoring stations and air pollutant concentrations, and generates an average MAPE and MSE of 10.90% and 12.02%, respectively.
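The three comparison metrics used above are standard; for reference, here they are in plain Python over a pair of predicted vs. observed concentration series (the pollutant values are made up for the example):

```python
def mae(y_true, y_pred):
    """Mean Absolute Error."""
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

def mse(y_true, y_pred):
    """Mean Squared Error."""
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)

def mape(y_true, y_pred):
    """Mean Absolute Percentage Error; assumes no zero observations."""
    return 100 * sum(abs((t - p) / t) for t, p in zip(y_true, y_pred)) / len(y_true)

observed  = [40.0, 50.0, 60.0]   # e.g. hypothetical PM2.5 readings
predicted = [42.0, 48.0, 63.0]
```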

16 pages, 2000 KiB  
Article
Context-Based, Predictive Access Control to Electronic Health Records
Electronics 2022, 11(19), 3040; https://doi.org/10.3390/electronics11193040 - 24 Sep 2022
Cited by 2 | Viewed by 1332
Abstract
Effective access control techniques are in demand, as electronically assisted healthcare services require the patient's sensitive health records. In emergency situations, where the patient's well-being is jeopardized, the healthcare actors involved in the emergency should be granted permission to access the patient's Electronic Health Records (EHRs). The research objective of our study is to develop machine learning techniques based on patients' time-sequential health metrics and integrate them with an Attribute-Based Access Control (ABAC) mechanism. We propose an ABAC mechanism that can yield access to sensitive EHR systems by applying prognostic context handlers, in which contextual information is used to identify emergency conditions and permit access to medical records. Specifically, we use a patient's recent health history to predict their health metrics for the next two hours by leveraging Long Short-Term Memory (LSTM) Neural Networks (NNs). These predicted health metric values are then evaluated by our personalized fuzzy context handlers to predict the criticality of the patient's status. The developed access control method provides emergency clinicians with secure access to sensitive information while simultaneously safeguarding the patient's well-being. Integrating this predictive mechanism with personalized context handlers proved to be a robust way to enhance the performance of access control mechanisms for modern EHR systems.
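The decision flow can be sketched at a very high level: forecast vitals, score them, grant emergency access above a criticality threshold. In the sketch below the paper's LSTM forecaster and personalized fuzzy handlers are replaced by fixed hand-written rules, and all thresholds and vital signs are illustrative assumptions:

```python
def criticality(heart_rate, spo2):
    """Map a predicted (heart rate, SpO2) pair to a [0, 1] criticality score.
    The cutoffs are invented for illustration, not clinical guidance."""
    score = 0.0
    if heart_rate > 120 or heart_rate < 45:
        score += 0.5
    if spo2 < 92:
        score += 0.5
    return score

def grant_emergency_access(predicted_vitals, threshold=0.5):
    """ABAC-style check: any predicted reading crossing the threshold
    unlocks emergency access to the EHR."""
    return any(criticality(hr, s) >= threshold for hr, s in predicted_vitals)

# Hypothetical two-hour forecast of (heart rate, SpO2) pairs.
forecast = [(88, 97), (130, 90), (95, 96)]
```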

21 pages, 3080 KiB  
Article
RISC-Vlim, a RISC-V Framework for Logic-in-Memory Architectures
Electronics 2022, 11(19), 2990; https://doi.org/10.3390/electronics11192990 - 21 Sep 2022
Cited by 2 | Viewed by 17280
Abstract
Most modern CPU architectures are based on the von Neumann principle, where memory and processing units are separate entities. Although processing unit performance has improved over the years, memory has not followed the same trend, creating a performance gap between them. This problem is known as the "memory wall" and severely limits the performance of a microprocessor. One of the most promising solutions is the "logic-in-memory" approach, which merges memory and logic units, enabling data to be processed directly inside the memory itself. Here we propose a RISC-V framework that supports logic-in-memory operations. We substitute the data memory with a circuit capable of both storing data and performing in-memory computation. The framework is based on a standard memory interface, so different logic-in-memory architectures, based both on CMOS and on emerging technologies, can be inserted into the microprocessor. The main advantage of this framework is the possibility of comparing the performance of different logic-in-memory solutions on code execution. We demonstrate the effectiveness of the framework using a CMOS volatile memory and a memory based on racetrack logic, a new emerging technology. The results show an improvement in algorithm execution speed and a reduction in energy consumption.
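Conceptually, logic-in-memory means the data memory can execute simple operations on its own rows, so the processor issues one "in-memory op" instead of loading both operands across the bus. A minimal behavioral model, with operation names invented for the sketch (not the paper's actual ISA extension):

```python
class LiMMemory:
    """Toy logic-in-memory: a word array that can combine its own rows."""

    def __init__(self, words):
        self.mem = list(words)

    def load(self, addr):
        return self.mem[addr]

    def store(self, addr, value):
        self.mem[addr] = value

    def lim_op(self, op, dst, src):
        """Compute dst <- dst OP src entirely 'inside' the memory,
        with no operand round-trip through the CPU."""
        a, b = self.mem[dst], self.mem[src]
        self.mem[dst] = {"and": a & b, "or": a | b, "xor": a ^ b}[op]

m = LiMMemory([0b1100, 0b1010, 0, 0])
m.lim_op("xor", 0, 1)       # row 0 becomes 0b0110 in place
```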

26 pages, 2347 KiB  
Article
A Multi-Objective Approach for Optimizing Edge-Based Resource Allocation Using TOPSIS
Electronics 2022, 11(18), 2888; https://doi.org/10.3390/electronics11182888 - 13 Sep 2022
Cited by 6 | Viewed by 1846
Abstract
Existing approaches for allocating resources in edge environments are inefficient and lack support for heterogeneous edge devices, and in turn fail to reduce dependency on cloud infrastructures or datacenters. To this end, we propose OpERA, a multi-layered edge-based resource allocation optimization framework that supports the heterogeneous and seamless execution of offloadable tasks across edge, fog, and cloud computing layers and architectures. By capturing offloadable task requirements, OpERA can identify suitable resources within nearby edge or fog layers, thus optimizing the execution process. We present results showing the effectiveness of our optimization strategy in reducing costs, minimizing energy consumption, and promoting other residual gains in processing computation, network bandwidth, and task execution time. We also demonstrate that optimizing resource allocation in computation offloading increases the likelihood of successful task offloading, particularly for the computationally intensive tasks that are becoming integral to many IoT applications, such as robotic surgery, autonomous driving, smart city monitoring device grids, and deep learning tasks. The evaluation of our OpERA optimization algorithm reveals that the TOPSIS MCDM technique effectively identifies optimal compute resources for processing offloadable tasks, with a 96% success rate. Moreover, the results from our experiments with a diverse range of use cases show that our optimization strategy can reduce energy consumption by up to 88%, and operational costs by 76%, by identifying the relevant compute resources.
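TOPSIS itself is a standard multi-criteria decision-making procedure: normalize the decision matrix, weight it, and rank alternatives by closeness to the ideal solution. A compact pass over a toy matrix, as a sketch of how candidate compute resources might be ranked; the criteria, weights, and values are invented for illustration:

```python
import math

def topsis(matrix, weights, benefit):
    """Rank alternatives (rows) by closeness to the ideal solution.
    benefit[j] is True when higher is better for criterion j."""
    ncols = len(weights)
    # 1. Vector-normalize each column, then apply the weights.
    norms = [math.sqrt(sum(row[j] ** 2 for row in matrix)) for j in range(ncols)]
    v = [[weights[j] * row[j] / norms[j] for j in range(ncols)] for row in matrix]
    # 2. Ideal best and worst value per criterion.
    best = [max(col) if benefit[j] else min(col) for j, col in enumerate(zip(*v))]
    worst = [min(col) if benefit[j] else max(col) for j, col in enumerate(zip(*v))]
    # 3. Closeness coefficient for each alternative.
    scores = []
    for row in v:
        d_best, d_worst = math.dist(row, best), math.dist(row, worst)
        scores.append(d_worst / (d_best + d_worst))
    return scores

# Rows: candidate resources; columns: CPU speed (benefit),
# energy cost (cost), network latency (cost). Values are hypothetical.
candidates = [[3.2, 40, 12], [2.4, 25, 30], [2.9, 30, 18]]
scores = topsis(candidates, weights=[0.5, 0.3, 0.2],
                benefit=[True, False, False])
```

Here the third candidate wins: it trades a little CPU speed for much lower energy cost and latency than the first.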

11 pages, 1511 KiB  
Article
Multi-Class Positive and Unlabeled Learning for High Dimensional Data Based on Outlier Detection in a Low Dimensional Embedding Space
Electronics 2022, 11(17), 2789; https://doi.org/10.3390/electronics11172789 - 05 Sep 2022
Viewed by 1309
Abstract
Positive and unlabeled (PU) learning trains a binary classifier on labeled positive data and unlabeled data containing samples of the positive class and unknown negative classes, whereas multi-class positive and unlabeled (MPU) learning aims to learn a multi-class classifier assuming labeled data from multiple positive classes. In this paper, we propose a two-step approach for MPU learning on high-dimensional data. In the first step, negative samples are selected from the unlabeled data using an ensemble of k-nearest neighbors-based outlier detection models in a low-dimensional space embedded by a linear discriminant function; we present a binary prediction approach that determines whether a data sample is negative. In the second step, the linear discriminant function is optimized on the labeled positive data and the negative samples selected in the first step, alternating between updating the parameters of the linear discriminant function and selecting reliable negative samples by detecting outliers in the low-dimensional space. Experimental results on high-dimensional text data demonstrate the high performance of the proposed MPU learning method.
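The core of the first step is a k-NN outlier score: unlabeled points far from all labeled positives in the embedding are taken as reliable negatives. In the sketch below the "embedding" is simply 2-D points, and `k` and the cutoff are illustrative assumptions rather than the paper's tuned settings:

```python
import math

def knn_outlier_score(point, positives, k=2):
    """Mean distance from `point` to its k nearest labeled positives."""
    dists = sorted(math.dist(point, p) for p in positives)
    return sum(dists[:k]) / k

def select_negatives(unlabeled, positives, k=2, cutoff=2.0):
    """Unlabeled samples whose outlier score exceeds the cutoff."""
    return [u for u in unlabeled
            if knn_outlier_score(u, positives, k) > cutoff]

positives = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
unlabeled = [(0.5, 0.5), (5.0, 5.0), (1.0, 1.0)]
negatives = select_negatives(unlabeled, positives)
```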

20 pages, 886 KiB  
Article
Built-In Functional Testing of Analog In-Memory Accelerators for Deep Neural Networks
Electronics 2022, 11(16), 2592; https://doi.org/10.3390/electronics11162592 - 18 Aug 2022
Cited by 1 | Viewed by 1257
Abstract
The paper develops a methodology for the online built-in self-testing of deep neural network (DNN) accelerators to validate their correct operation with respect to their functional specifications. The DNN of interest is realized in hardware to perform in-memory computing using non-volatile memory cells as computational units. Assuming a functional fault model, we develop methods to generate pseudorandom and structured test patterns to detect hardware faults. We also develop a test-sequencing strategy that combines these different classes of tests to achieve high fault coverage. The testing methodology is applied to a broad class of DNNs trained to classify images from the MNIST, Fashion-MNIST, and CIFAR-10 datasets, with the goal of exposing hardware faults that may lead to the incorrect classification of images. We achieve an average fault coverage of 94% for these different architectures, some of which are large and complex.
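The essence of functional testing is to drive the hardware with chosen inputs and compare against a golden software model. A toy version, with a simulated matrix-vector unit, one-hot "structured" patterns, and an invented stuck-at fault model (none of which are the paper's actual designs):

```python
def mac_array(weights, x, fault=None):
    """y = W.x, with an optional stuck-at fault (row, col, stuck_value)."""
    w = [row[:] for row in weights]
    if fault:
        r, c, val = fault
        w[r][c] = val
    return [sum(wij * xj for wij, xj in zip(row, x)) for row in w]

def run_tests(weights, fault):
    """Apply a one-hot pattern per column; a fault is detected when the
    faulty output differs from the golden (fault-free) output."""
    n = len(weights[0])
    detected = False
    for j in range(n):
        x = [1 if k == j else 0 for k in range(n)]
        if mac_array(weights, x, fault) != mac_array(weights, x):
            detected = True
    return detected

W = [[1, 2], [3, 4]]
```

Note the second case below: a cell "stuck" at its correct value is functionally undetectable, which is why coverage is measured against a fault model rather than claimed absolutely.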

16 pages, 1412 KiB  
Article
A Sustainable Deep Learning-Based Framework for Automated Segmentation of COVID-19 Infected Regions: Using U-Net with an Attention Mechanism and Boundary Loss Function
Electronics 2022, 11(15), 2296; https://doi.org/10.3390/electronics11152296 - 23 Jul 2022
Cited by 14 | Viewed by 1807
Abstract
COVID-19 has been spreading rapidly, affecting billions of people globally, with significant public health impacts. Biomedical imaging, such as computed tomography (CT), has significant potential as a possible substitute for the screening process. Because of this, automatic image segmentation is highly desirable as clinical decision support for the extensive evaluation of disease control and monitoring: it plays a central role in the precise segmentation of infected regions in CT scans, thus helping in screening, diagnosis, and disease monitoring. For this purpose, we introduce a deep learning framework for the automated segmentation of COVID-19-infected regions in lung CT scan images. Specifically, we adopt a segmentation model, U-Net, and, since not all of the features obtained from the encoders are valuable for segmentation, we apply an attention mechanism to the architecture for a better representation of the features and an enhanced ability to segment virus-infected regions. Moreover, we apply a boundary loss function to deal with small and unbalanced lesion segmentations, and we also consider a weighted binary cross-entropy dice loss function. Using different public CT scan image datasets, we validated the framework's effectiveness against other segmentation techniques. The experimental outcomes show the improved performance of the presented framework for the automated segmentation of lungs and infected areas in CT scan images, with overall dice accuracies of 0.93 for the lungs and 0.76 for the COVID-19-infected regions.
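The reported dice accuracies measure the overlap between a predicted mask and the ground truth, 2|A∩B| / (|A| + |B|). A plain-Python version for flat binary masks, with made-up example masks:

```python
def dice_coefficient(pred, truth):
    """Dice overlap between two flat binary masks."""
    inter = sum(p & t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    return 1.0 if total == 0 else 2 * inter / total

pred  = [1, 1, 0, 0, 1, 0]
truth = [1, 0, 0, 0, 1, 1]
score = dice_coefficient(pred, truth)   # 2*2 / (3 + 3)
```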

16 pages, 3564 KiB  
Article
ECG Heartbeat Classification Using CONVXGB Model
Electronics 2022, 11(15), 2280; https://doi.org/10.3390/electronics11152280 - 22 Jul 2022
Cited by 3 | Viewed by 2106
Abstract
Electrocardiogram (ECG) signals are reliable in identifying and monitoring patients with various cardiac diseases and severe cardiovascular syndromes, including arrhythmia and myocardial infarction (MI). Thus, cardiologists use ECG signals in diagnosing cardiac diseases. Machine learning (ML) has also proven its usefulness in the medical field and in signal classification. However, current ML approaches rely on hand-crafted feature extraction methods or very complicated deep learning networks. This paper presents ConvXGB, a novel method for feature extraction from ECG signals and ECG classification using a convolutional neural network (CNN) with eXtreme Gradient Boosting (XGBoost). The model was established by stacking two convolutional layers for automatic feature extraction from ECG signals, followed by XGBoost as the last layer, which is used for classification. This technique simplifies ECG classification in comparison to other methods by minimizing the number of required parameters and eliminating the need for weight readjustment throughout the backpropagation phase. Furthermore, experiments on two well-known ECG datasets, the Massachusetts Institute of Technology-Beth Israel Hospital (MIT-BIH) and Physikalisch-Technische Bundesanstalt (PTB) datasets, demonstrated that this technique handles the ECG signal classification problem better than either a CNN or XGBoost alone. In addition, a comparison showed that this model outperforms state-of-the-art models, with scores of 0.9938, 0.9839, 0.9836, 0.9837, and 0.9911 for accuracy, precision, recall, F1-score, and specificity, respectively.

18 pages, 1368 KiB  
Article
To Use or Not to Use: Impact of Personality on the Intention of Using Gamified Learning Environments
Electronics 2022, 11(12), 1907; https://doi.org/10.3390/electronics11121907 - 18 Jun 2022
Cited by 5 | Viewed by 2218
Abstract
Technology acceptance is essential for technology success. However, individual users are known to differ in their tendency to adopt and interact with new technologies. Among these individual differences, personality has been shown to be a predictor of users' beliefs about technology acceptance. Gamification, on the other hand, has been shown to be a good way to improve students' motivation and engagement while learning. Despite the growing interest in gamification, little research attention has been paid to the effect of personality, specifically as described by the Five Factor Model (FFM), on gamification acceptance in learning environments. Therefore, this study develops a model to elucidate how personality traits affect students' acceptance of gamified learning environments and their continuance intention to use these environments. In particular, the Technology Acceptance Model (TAM) was used to examine the factors affecting students' intentions to use a gamified learning environment. To test the research hypotheses, eighty-three students participated in this study, and structural equation modeling via Partial Least Squares (PLS) was performed. The results showed that the research model, based on TAM and FFM, provides a comprehensive understanding of the behaviors related to the acceptance of, and intention to use, gamified learning environments, as follows: (1) usefulness is the most influential factor in the intention to use the gamified learning environment; (2) unexpectedly, perceived ease of use has no significant effect on perceived usefulness or on behavioral attitudes toward the gamified learning environment; (3) extraversion affects students' perceived ease of use of the gamified learning environment; (4) neuroticism affects students' perceived usefulness of the gamified learning environment; and (5) openness affects students' behavioral attitudes toward using the gamified learning environment.
This study can contribute to the Human-Computer Interaction field by providing researchers and practitioners with insights into how to motivate students with different personality traits to continue using gamified learning environments.

14 pages, 2087 KiB  
Article
Financial Data Anomaly Discovery Using Behavioral Change Indicators