16 pages, 348 KB  
Article
Performance Evaluation of Offline Speech Recognition on Edge Devices
by Santosh Gondi and Vineel Pratap
Electronics 2021, 10(21), 2697; https://doi.org/10.3390/electronics10212697 - 4 Nov 2021
Cited by 14 | Viewed by 8869
Abstract
Deep learning–based speech recognition applications have made great strides in the past decade. Deep learning–based systems have evolved to achieve higher accuracy while using simpler end-to-end architectures, compared to their predecessor hybrid architectures. Most of these state-of-the-art systems run on backend servers with large amounts of memory and CPU/GPU resources. The major disadvantage of server-based speech recognition is the lack of privacy and security for user speech data. Additionally, because of its network dependency, this server-based architecture cannot always be reliable, performant and available. Offline speech recognition on client devices overcomes these issues. However, resource constraints on smaller edge devices may pose challenges for achieving state-of-the-art speech recognition results. In this paper, we evaluate the performance and efficiency of transformer-based speech recognition systems on edge devices. We evaluate inference performance on two popular edge devices, Raspberry Pi and Nvidia Jetson Nano, running on CPU and GPU, respectively. We conclude that with PyTorch mobile optimization and quantization, the models can achieve real-time inference on the Raspberry Pi CPU with a small degradation in word error rate. On the Jetson Nano GPU, the inference latency is three to five times better than on the Raspberry Pi. The word error rate on the edge is still higher than that of server-side inference, but not by a large margin. Full article
(This article belongs to the Special Issue Human Computer Interaction for Intelligent Systems)
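As a concrete illustration of the optimization step mentioned in the abstract, the sketch below applies PyTorch dynamic quantization and mobile export to a stand-in model. It is a minimal sketch of the general technique, not the authors' pipeline; the layer sizes and output file name are illustrative assumptions.

import torch
from torch.utils.mobile_optimizer import optimize_for_mobile

# Stand-in for an acoustic model; the paper's transformer models are far larger.
model = torch.nn.Sequential(
    torch.nn.Linear(80, 256),
    torch.nn.ReLU(),
    torch.nn.Linear(256, 32),
)
model.eval()

# Dynamic quantization stores Linear weights as int8, shrinking the model and
# speeding up CPU inference at a small cost in word error rate.
quantized = torch.quantization.quantize_dynamic(
    model, {torch.nn.Linear}, dtype=torch.qint8
)

# TorchScript plus mobile optimization yields an artifact suitable for
# on-device deployment (e.g., on a Raspberry Pi).
scripted = torch.jit.script(quantized)
mobile_ready = optimize_for_mobile(scripted)
mobile_ready._save_for_lite_interpreter("asr_quantized.ptl")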

17 pages, 4146 KB  
Article
Comparative Analysis of Performance between Multimodal Implementation of Chatbot Based on News Classification Data Using Categories
by Prasnurzaki Anki, Alhadi Bustamam and Rinaldi Anwar Buyung
Electronics 2021, 10(21), 2696; https://doi.org/10.3390/electronics10212696 - 4 Nov 2021
Cited by 12 | Viewed by 3614
Abstract
In the modern era, chatbots can be applied in various fields of science. This research focuses on sentence classification using the News Aggregator Dataset, which is used to test the models against the categories defined for the chatbot program. The chatbot program was evaluated in a multimodal implementation with four models (GRU, Bi-GRU, 1D CNN, 1D CNN Transpose) and six parameter variations to obtain the best results across the entire trial. The best result in this research was achieved by the 1D CNN Transpose model, which produced an accuracy value of 0.9919. The tests on both types of chatbot are expected to produce precise and accurate sentence predictions and detection results. The stages of building the program are explained in detail, so that users can understand not only how to use the program by entering an input and receiving its output, but also how each step works, as described in the sub-topics of this study. Full article
(This article belongs to the Special Issue Recent Trends in Intelligent Systems)
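To make the classification setup more concrete, the sketch below shows a minimal 1D CNN sentence classifier for the four News Aggregator categories. It is written in PyTorch for brevity and only illustrates the model family the abstract names; the vocabulary size, layer widths and framework are assumptions, not the paper's exact configuration.

import torch
import torch.nn as nn

class NewsCNN(nn.Module):
    def __init__(self, vocab_size=20000, embed_dim=128, num_classes=4):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.conv = nn.Conv1d(embed_dim, 64, kernel_size=5, padding=2)
        self.pool = nn.AdaptiveMaxPool1d(1)
        self.fc = nn.Linear(64, num_classes)

    def forward(self, token_ids):                     # token_ids: (batch, seq_len)
        x = self.embed(token_ids).transpose(1, 2)     # (batch, embed_dim, seq_len)
        x = torch.relu(self.conv(x))
        x = self.pool(x).squeeze(-1)                  # (batch, 64)
        return self.fc(x)                             # logits over news categories

# Example: classify a batch of 8 headlines of 40 tokens each (random ids here).
logits = NewsCNN()(torch.randint(0, 20000, (8, 40)))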

17 pages, 862 KB  
Article
Generalized Model Reference Scheduling and Control Co-Design with Guaranteed Performance
by Shunli Zhao, Cong Zhang and Lei Shao
Electronics 2021, 10(21), 2695; https://doi.org/10.3390/electronics10212695 - 4 Nov 2021
Cited by 1 | Viewed by 1885
Abstract
In this paper, a generalized model-reference scheduling (GMRS) scheme is proposed for a networked control system (NCS) with guaranteed performance control and a medium access constraint (MAC). The presented GMRS reduces conservatism to some extent, and its performance is improved by adjusting the weighted gain. In addition, two cases of uncertainty, existing in the system matrix and in the control matrix, are considered. For the first case, a co-design of guaranteed performance control and the GMRS is studied for the NCS with and without a zero-order-hold mechanism. For the second case, the uncertainty induced by time delay is considered in the co-design of guaranteed performance control and the GMRS. Finally, illustrative examples are given to demonstrate the effectiveness of the proposed co-design schemes. Full article
(This article belongs to the Section Systems & Control Engineering)

11 pages, 8668 KB  
Article
Underground Imaging by Sub-Terahertz Radiation
by Yuan Zheng, Calvin Domier, Michelle Gonzalez, Neville C. Luhmann, Jr. and Diana Gamzina
Electronics 2021, 10(21), 2694; https://doi.org/10.3390/electronics10212694 - 4 Nov 2021
Viewed by 3813
Abstract
Sub-terahertz ground-penetrating radar systems offer an alternative to radio wave-based systems in the airborne imaging of buried objects. Laboratory prototype systems operating in W-band (75–110 GHz) and F-band (90–140 GHz) are presented, detecting the distance between target and source and imaging metal objects buried in mixed soil. The experimental results show that imaging in the 100–150 GHz frequency range is feasible for underground applications but significantly restricted by the attenuation characteristics of the medium covering the targets. A higher power source and more sensitive receiving components are essential to increase the penetration capability and expand the application settings of this approach. Full article
(This article belongs to the Special Issue Analysis and Test of Microwave Circuits and Subsystems)

9 pages, 2409 KB  
Article
Performance Enhancement of Photoconductive Antenna Using Saw-Toothed Plasmonic Contact Electrodes
by Xingyun Zhang, Fangyuan Zhan, Xianlong Wei, Wenlong He and Cunjun Ruan
Electronics 2021, 10(21), 2693; https://doi.org/10.3390/electronics10212693 - 4 Nov 2021
Cited by 4 | Viewed by 3398
Abstract
A photoconductive logarithmic spiral antenna with saw-toothed plasmonic contact electrodes is proposed to provide stronger terahertz radiation than the conventional photoconductive antenna (PCA). The saw-toothed plasmonic contact electrodes create a strong electric field between the anode and cathode, which generates a larger photocurrent and thereby effectively increases the terahertz radiation. The proposed PCA was fabricated and measured in response to an 80 fs optical pump from a fiber-based femtosecond laser with a wavelength of 780 nm. When the proposed antenna was driven with an optical pump power of 20 mW and a bias voltage of 40 V, broadband pulsed terahertz radiation in the frequency range of 0.1–2 THz was observed. Compared to the conventional PCA, the THz power measured by terahertz time-domain spectroscopy (THz-TDS) increased by an average factor of 10.45. Full article
(This article belongs to the Special Issue Terahertz Nanoantennas: Design and Applications)

11 pages, 4452 KB  
Article
Bidirectional Electric-Induced Conductance Based on GeTe/Sb2Te3 Interfacial Phase Change Memory for Neuro-Inspired Computing
by Shin-young Kang, Soo-min Jin, Ju-young Lee, Dae-seong Woo, Tae-hun Shim, In-ho Nam, Jea-gun Park, Yuji Sutou and Yun-heub Song
Electronics 2021, 10(21), 2692; https://doi.org/10.3390/electronics10212692 - 4 Nov 2021
Cited by 5 | Viewed by 2718
Abstract
In line with the principles of biological synapses, an essential prerequisite for hardware neural networks built from electronic devices is the continuous regulation of conductance. We implemented artificial synaptic characteristics in a (GeTe/Sb2Te3)16 iPCM with a superlattice structure under optimized identical pulse trains. By atomically controlling the Ge switch in the phase transition that appears in the GeTe/Sb2Te3 superlattice structure, multiple conductance states were implemented by applying the appropriate electrical pulses. Furthermore, we found that the bidirectional switching behavior of a (GeTe/Sb2Te3)16 iPCM can achieve a desired resistance level by using the pulse width. Therefore, we fabricated a Ge2Sb2Te5 PCM and designed a pulse scheme, based on the phase transition mechanism, to compare with the (GeTe/Sb2Te3)16 iPCM. We also designed an identical pulse scheme that implements both linear and symmetrical long-term potentiation (LTP) and depression (LTD), based on the iPCM mechanism. As a result, the (GeTe/Sb2Te3)16 iPCM showed excellent synaptic characteristics, implementing gradual conductance modulation, a nonlinearity value of 0.32, and 40 LTP/LTD conductance states using identical pulse trains. Our results demonstrate the general applicability of the artificial synaptic device for potential use in neuro-inspired computing and next-generation, non-volatile memory. Full article
(This article belongs to the Special Issue New CMOS Devices and Their Applications II)

14 pages, 1476 KB  
Article
Multi-Task Learning with Task-Specific Feature Filtering in Low-Data Condition
by Sang-woo Lee, Ryong Lee, Min-seok Seo, Jong-chan Park, Hyeon-cheol Noh, Jin-gi Ju, Rae-young Jang, Gun-woo Lee, Myung-seok Choi and Dong-geol Choi
Electronics 2021, 10(21), 2691; https://doi.org/10.3390/electronics10212691 - 4 Nov 2021
Cited by 4 | Viewed by 3704
Abstract
Multi-task learning (MTL) is a computationally efficient method to solve multiple tasks in one multi-task model, instead of multiple single-task models. MTL is expected to learn both diverse and shareable visual features from multiple datasets. However, MTL usually does not outperform single-task learning. Recent MTL methods tend to use heavy task-specific heads with large overheads to generate task-specific features. In this work, we (1) validate the efficacy of MTL in low-data conditions with early-exit architectures, and (2) propose a simple feature filtering module with minimal overheads to generate task-specific features. We assume that, in low-data conditions, the model cannot learn useful low-level features due to the limited amount of data. We empirically show that MTL can significantly improve performance in all tasks under low-data conditions. We further optimize the early-exit architecture by a sweep search for the optimal feature for each task. Furthermore, we propose a feature filtering module that selects features for each task. Using the optimized early-exit architecture with the feature filtering module, we improve accuracy by 15.937% on ImageNet and 4.847% on Places365 under the low-data condition where only 5% of the original datasets are available. Our method is empirically validated with various backbones and in various MTL settings. Full article
(This article belongs to the Collection Computer Vision and Pattern Recognition Techniques)
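The abstract's task-specific feature filtering idea can be pictured as a lightweight, per-task gate over shared backbone features. The sketch below is a hypothetical minimal version using one learned channel-wise gate per task; the actual module proposed in the paper may differ.

import torch
import torch.nn as nn

class TaskFeatureFilter(nn.Module):
    def __init__(self, channels, num_tasks):
        super().__init__()
        # One learnable gate vector per task, applied to the shared feature maps.
        self.gates = nn.Parameter(torch.ones(num_tasks, channels))

    def forward(self, shared_features, task_id):
        # shared_features: (batch, channels, H, W) from the shared backbone.
        gate = torch.sigmoid(self.gates[task_id]).view(1, -1, 1, 1)
        return shared_features * gate      # task-specific filtered features

# Example: filter a shared 256-channel feature map for task 0 of 2.
features = torch.randn(4, 256, 14, 14)
filtered = TaskFeatureFilter(256, num_tasks=2)(features, task_id=0)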

16 pages, 2109 KB  
Article
Intelligent Performance Prediction: The Use Case of a Hadoop Cluster
by Dimitris Uzunidis, Panagiotis Karkazis, Chara Roussou, Charalampos Patrikakis and Helen C. Leligou
Electronics 2021, 10(21), 2690; https://doi.org/10.3390/electronics10212690 - 3 Nov 2021
Cited by 17 | Viewed by 2964
Abstract
The optimum utilization of infrastructural resources is a highly desired yet cumbersome task for service providers to achieve. This is because the optimal amount of such resources is a function of various parameters, such as the desired/agreed quality of service (QoS), the service characteristics/profile, the workload and the service life-cycle. The advent of frameworks that foresee the dynamic establishment and placement of service and network functions further reduces the effectiveness of traditional resource allocation methods. In this work, we address this problem by developing a mechanism that first performs service profiling and then predicts the resources that would lead to the desired QoS for each newly deployed service. The main elements of our approach are as follows: (a) the collection of data from all three layers of the deployed infrastructure (hardware, virtual and service), instead of a single layer, to provide a clearer picture of the potential system break points, (b) the study of well-known container-based implementations following the microservice paradigm and (c) the use of a data analysis routine that employs a set of machine learning algorithms and performs accurate predictions of the required resources for any future service request. We investigate the performance of the proposed framework using our open-source implementation to examine the case of a Hadoop cluster. The results show that running a small number of tests is adequate to assess the main system break points and, at the same time, to attain accurate resource predictions for any future request. Full article
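The prediction step described in the abstract amounts to fitting a regression model on profiling runs and querying it for new service requests. The sketch below illustrates this with scikit-learn; the feature columns, target and sample values are purely illustrative assumptions, not measurements from the paper.

import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Each row is one profiling run: allocated vCPUs, memory (GB), input size (GB).
# The target is the observed job completion time (s). Values are made up.
X = np.array([[2, 4, 1], [4, 8, 1], [4, 8, 5], [8, 16, 5], [8, 16, 10]], dtype=float)
y = np.array([120.0, 70.0, 210.0, 110.0, 190.0])

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# Predict the expected completion time of a new Hadoop job before deploying it,
# to check whether a candidate resource allocation meets the agreed QoS.
print(model.predict([[4, 8, 10]]))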

43 pages, 5869 KB  
Review
Artificial Neural Networks Based Optimization Techniques: A Review
by Maher G. M. Abdolrasol, S. M. Suhail Hussain, Taha Selim Ustun, Mahidur R. Sarker, Mahammad A. Hannan, Ramizi Mohamed, Jamal Abd Ali, Saad Mekhilef and Abdalrhman Milad
Electronics 2021, 10(21), 2689; https://doi.org/10.3390/electronics10212689 - 3 Nov 2021
Cited by 538 | Viewed by 55713
Abstract
In the last few years, intensive research has been done to enhance artificial intelligence (AI) using optimization techniques. In this paper, we present an extensive review of artificial neural network (ANN)-based optimization techniques, covering some of the well-known optimization techniques, e.g., the genetic algorithm (GA), particle swarm optimization (PSO), the artificial bee colony (ABC) and the backtracking search algorithm (BSA), as well as more recently developed techniques, e.g., the lightning search algorithm (LSA) and the whale optimization algorithm (WOA), and many more. The entire set of such techniques is classified as population-based algorithms, where the initial population is randomly created, the input parameters are initialized within the specified range, and the algorithms can provide optimal solutions. This paper emphasizes enhancing the neural network via optimization algorithms by manipulating its tuned parameters or training parameters to obtain the best network structure and solve the problems in the best way. The paper includes results for improving ANN performance with the PSO, GA, ABC and BSA optimization techniques, respectively, by searching for optimal parameters, e.g., the number of neurons in the hidden layers and the learning rate. The obtained neural network is then used for solving energy management problems in a virtual power plant system. Full article
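As a worked example of the population-based tuning the review describes, the sketch below runs a small particle swarm optimization over two ANN hyperparameters (hidden-layer size and learning rate). The objective function is a stand-in; in practice each particle's fitness would be the validation loss of a network trained with those parameters.

import numpy as np

def validation_loss(params):
    # Stand-in objective; in practice, train an ANN with these hyperparameters
    # and return its validation loss.
    n_neurons, lr = params
    return (n_neurons - 32.0) ** 2 / 1000.0 + (np.log10(lr) + 2.0) ** 2

rng = np.random.default_rng(0)
n_particles, n_iterations = 20, 50
lower, upper = np.array([4.0, 1e-4]), np.array([128.0, 1e-1])

pos = rng.uniform(lower, upper, size=(n_particles, 2))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_val = np.array([validation_loss(p) for p in pos])
gbest = pbest[pbest_val.argmin()].copy()

for _ in range(n_iterations):
    r1, r2 = rng.random((2, n_particles, 1))
    # Standard PSO update: inertia plus pulls toward personal and global bests.
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, lower, upper)
    values = np.array([validation_loss(p) for p in pos])
    improved = values < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], values[improved]
    gbest = pbest[pbest_val.argmin()].copy()

print("best (hidden neurons, learning rate):", gbest)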

24 pages, 1200 KB  
Article
Survey on Machine Learning Algorithms Enhancing the Functional Verification Process
by Khaled A. Ismail and Mohamed A. Abd El Ghany
Electronics 2021, 10(21), 2688; https://doi.org/10.3390/electronics10212688 - 3 Nov 2021
Cited by 12 | Viewed by 6630
Abstract
The continuing increase in the functional requirements of modern hardware designs means the traditional functional verification process is becoming inefficient in meeting the time-to-market goal with a sufficient level of confidence in the design. Therefore, the need to enhance the process is evident. Machine learning (ML) models have proved valuable for automating major parts of the process that have typically occupied the bandwidth of engineers, diverting them from adding new coverage metrics to make the designs more robust. Current research on deploying different ML models proves promising in areas such as stimulus constraining, test generation, coverage collection, and bug detection and localization. One example of deploying an artificial neural network (ANN) in test generation shows a 24.5× speed-up in functionally verifying a dual-core RISC processor specification. Another study demonstrates how k-means clustering can reduce the redundancy of the simulation trace dump of an AHB-to-WISHBONE bridge by 21%, thereby reducing the debugging effort by not having to inspect unnecessary waveforms. The surveyed work provides a comprehensive overview of the current ML models enhancing the functional verification process, from which an insight into promising future research areas is inferred. Full article
(This article belongs to the Section Industrial Electronics)
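To illustrate the k-means use case cited in the abstract, the sketch below clusters simulation traces encoded as fixed-length feature vectors and keeps one representative per cluster for waveform inspection. The encoding and the data are hypothetical; the surveyed work's actual feature extraction is not reproduced here.

import numpy as np
from sklearn.cluster import KMeans

# Each row: a simulation trace encoded as a fixed-length feature vector
# (e.g., counts of bus transaction types observed during the run).
traces = np.array([
    [5, 0, 2, 1], [5, 0, 2, 1], [4, 1, 2, 1],   # near-duplicate traces
    [0, 7, 1, 3], [1, 7, 0, 3],
    [2, 2, 9, 0],
])

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(traces)

# Inspect one representative trace per cluster instead of every waveform dump.
representatives = [np.where(kmeans.labels_ == c)[0][0] for c in range(3)]
print("traces to inspect:", representatives)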

20 pages, 41669 KB  
Article
Feature-Based Interpretation of the Deep Neural Network
by Eun-Hun Lee and Hyeoncheol Kim
Electronics 2021, 10(21), 2687; https://doi.org/10.3390/electronics10212687 - 3 Nov 2021
Cited by 3 | Viewed by 2263
Abstract
The significant advantage of deep neural networks is that the upper layers can capture high-level features of the data based on the information acquired from the lower layers by stacking layers deeply. Since it is challenging to interpret what knowledge a neural network has learned, various studies for explaining neural networks have emerged to overcome this problem. However, these studies generate local explanations of single instances rather than providing a generalized global interpretation of the neural network model itself. To overcome such drawbacks of the previous approaches, we propose a global interpretation method for deep neural networks through the features of the model. We first analyzed the relationship between the input and hidden layers to represent the high-level features of the model, and then interpreted the decision-making process of the neural network through these high-level features. In addition, we applied network pruning techniques to make the explanations concise and analyzed the effect of layer complexity on interpretability. We present experiments on the proposed approach using three different datasets and show that our approach can generate global explanations for deep neural network models with high accuracy and fidelity. Full article
(This article belongs to the Special Issue Advances in Data Mining and Knowledge Discovery)

16 pages, 13045 KB  
Article
Iterative Self-Tuning Minimum Variance Control of a Nonlinear Autonomous Underwater Vehicle Maneuvering Model
by Maria Tomas-Rodríguez, Elías Revestido Herrero and Francisco J. Velasco
Electronics 2021, 10(21), 2686; https://doi.org/10.3390/electronics10212686 - 3 Nov 2021
Viewed by 2467
Abstract
This paper addresses the problem of control design for a nonlinear maneuvering model of an autonomous underwater vehicle. The control algorithm is based on an iteration technique that approximates the original nonlinear model by a sequence of linear time-varying equations equivalent to the original nonlinear problem, combined with a self-tuning control method, so that the controller is designed at each time point of the interval for trajectory tracking and heading-angle control. This work makes use of self-tuning minimum variance principles. The benefit of this approach is that the nonlinearities and couplings of the system are preserved, unlike in control designs based on linearized systems, thereby reducing the uncertainty in the model and increasing the robustness of the controller. The simulations presented here use a torpedo-shaped underwater vehicle model and show the good performance of the controller and accurate tracking for certain maneuvering cases. Full article
(This article belongs to the Special Issue Advances in Autonomous Control Systems and Their Applications)
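The iteration technique the abstract refers to replaces nonlinear dynamics of the form x' = A(x)x by a sequence of linear time-varying systems, each evaluated along the previous iterate's trajectory, until the trajectories converge. The numerical sketch below shows the idea on a toy two-state system; the matrix A(x) is a stand-in, not the AUV maneuvering model.

import numpy as np
from scipy.integrate import solve_ivp

def A(x):
    # Stand-in state-dependent system matrix (not the AUV maneuvering model).
    return np.array([[0.0, 1.0], [-1.0 - x[0] ** 2, -0.5]])

t_grid = np.linspace(0.0, 5.0, 200)
x0 = np.array([1.0, 0.0])

# Iteration 0: freeze the trajectory at the initial state.
prev = np.tile(x0.reshape(-1, 1), (1, t_grid.size))
for _ in range(8):
    def rhs(t, x, prev=prev):
        # Evaluate A along the previous iterate's trajectory (linear time-varying).
        x_prev = np.array([np.interp(t, t_grid, prev[i]) for i in range(2)])
        return A(x_prev) @ x
    sol = solve_ivp(rhs, (t_grid[0], t_grid[-1]), x0, t_eval=t_grid)
    change = np.max(np.abs(sol.y - prev))
    prev = sol.y
    if change < 1e-6:
        break

print("final state of the converged linear time-varying approximation:", prev[:, -1])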

13 pages, 868 KB  
Article
Ciphertext-Policy Attribute-Based Encryption with Outsourced Set Intersection in Multimedia Cloud Computing
by Yanfeng Shi and Shuo Qiu
Electronics 2021, 10(21), 2685; https://doi.org/10.3390/electronics10212685 - 3 Nov 2021
Cited by 3 | Viewed by 2067
Abstract
In a multimedia cloud computing system, suppose all cloud users outsource their own data sets to the cloud in encrypted form. Each outsourced set is associated with an access structure such that a valid data user, Bob, whose credentials satisfy the access structure is able to conduct computations over the outsourced encrypted set (e.g., decryption or other kinds of computing functions). Suppose Bob needs to compute the set intersection of a data owner Alice’s outsourced encrypted set and his own. Bob’s simple solution is to download Alice’s and his own outsourced encrypted sets, perform the set intersection operation, and decrypt the set intersection ciphertexts. A better solution is for Bob to delegate the set intersection computation to the cloud, without giving the cloud any ability to breach the secrecy of the sets. To solve this problem, this work introduces a novel primitive called ciphertext-policy attribute-based encryption with outsourced set intersection for multimedia cloud computing. It is the first cryptographic algorithm that simultaneously supports fully outsourced encrypted storage, computation delegation, fine-grained authorization security for the ciphertext-policy model without relying on an online trusted authority or the data owners, and multi-element sets. We construct a scheme that provably satisfies the desirable security properties and analyze its efficiency. Full article
(This article belongs to the Special Issue Multimedia Processing: Challenges and Prospects)

16 pages, 2378 KB  
Article
A Comprehensive Modeling of the Discrete and Dynamic Problem of Berth Allocation in Maritime Terminals
by Sami Mnasri and Malek Alrashidi
Electronics 2021, 10(21), 2684; https://doi.org/10.3390/electronics10212684 - 3 Nov 2021
Cited by 12 | Viewed by 3352
Abstract
In this study, the discrete and dynamic problem of berth allocation in maritime terminals is investigated. The suggested resolution method relies on an optimization paradigm combining two techniques: a heuristic and a multi-agent approach. Indeed, a set of techniques is involved, such as the contract net negotiation protocol, multi-agent interactions, and the Worst-Fit arrangement technique. The main objective of the study is to propose a solution for assigning m parallel machines to a set of activities. The contribution of the study is to provide a detailed modeling of the discrete and dynamic berth allocation problem by establishing the corresponding models using a multi-agent methodology. A set of numerical experiments is detailed to prove the performance of the introduced multi-agent strategy compared with a genetic algorithm and tabu search. Full article
(This article belongs to the Section Computer Science & Engineering)
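The Worst-Fit arrangement step mentioned in the abstract can be read as a load-balancing rule: each arriving vessel is placed at the berth with the largest remaining slack, i.e., the one that frees up earliest. The sketch below is a hypothetical minimal version of that rule for m parallel berths; it is not the paper's full multi-agent negotiation scheme.

from dataclasses import dataclass

@dataclass
class Berth:
    name: str
    busy_until: float = 0.0   # time at which the berth becomes free

def worst_fit_assign(vessels, berths):
    """vessels: list of (vessel_id, arrival_time, handling_time) tuples."""
    schedule = []
    for vessel_id, arrival, handling in vessels:
        # Worst-fit reading: pick the berth with the most remaining slack,
        # i.e., the one that becomes free earliest.
        berth = min(berths, key=lambda b: b.busy_until)
        start = max(arrival, berth.busy_until)
        berth.busy_until = start + handling
        schedule.append((vessel_id, berth.name, start))
    return schedule

berths = [Berth("B1"), Berth("B2")]
vessels = [("V1", 0.0, 4.0), ("V2", 1.0, 2.0), ("V3", 2.0, 3.0)]
print(worst_fit_assign(vessels, berths))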

2 pages, 149 KB  
Editorial
Grid-Connected and Isolated Renewable Energy Systems
by Xiaoqiang Guo, Minh-Khai Nguyen, Mariusz Malinowski and Elisabetta Tedeschi
Electronics 2021, 10(21), 2683; https://doi.org/10.3390/electronics10212683 - 3 Nov 2021
Cited by 1 | Viewed by 1558
Abstract
With the rapid progression of renewable energies into grids, grid-connected systems are increasing dramatically around the world [...] Full article
(This article belongs to the Special Issue Grid-Connected and Isolated Renewable Energy Systems)