Editor’s Choice Articles

Editor’s Choice articles are based on recommendations by the scientific editors of MDPI journals from around the world. Editors select a small number of articles recently published in the journal that they believe will be particularly interesting to readers, or important in the respective research area. The aim is to provide a snapshot of some of the most exciting work published in the various research areas of the journal.

23 pages, 1646 KiB  
Article
On the Sizing of CMOS Operational Amplifiers by Applying Many-Objective Optimization Algorithms
by Martín Alejandro Valencia-Ponce, Esteban Tlelo-Cuautle and Luis Gerardo de la Fraga
Electronics 2021, 10(24), 3148; https://doi.org/10.3390/electronics10243148 - 17 Dec 2021
Cited by 23 | Viewed by 5438
Abstract
In CMOS integrated circuit (IC) design, operational amplifiers are among the most useful active devices for applications in analog signal processing, signal conditioning, and so on. However, due to CMOS technology downscaling, along with the very large number of design variables and their trade-offs, it is difficult to reach target specifications without applying optimization methods. For this reason, this work shows the advantages of many-objective optimization, comparing it to the well-known mono- and multi-objective metaheuristics that have demonstrated their usefulness in sizing CMOS ICs. Three CMOS operational transconductance amplifiers are the case study in this work; they were sized by applying mono-, multi-, and many-objective algorithms. The well-known non-dominated sorting genetic algorithm version 3 (NSGA-III) and the many-objective metaheuristic based on the R2 indicator (MOMBI-II) were applied to size the CMOS amplifiers, and their sized solutions were compared to those of mono- and multi-objective algorithms. The CMOS amplifiers were optimized considering five targets, associated with a figure of merit (FoM), differential gain, power consumption, common-mode rejection ratio, and total silicon area. The designs were performed using UMC 180 nm CMOS technology. To show the advantage of applying many-objective optimization algorithms to size CMOS amplifiers, the amplifier with the best performance was used to design a fractional-order integrator based on OTA-C filters. A variation analysis considering process, voltage, and temperature (PVT), together with a Monte Carlo analysis, was performed to verify design robustness. Finally, the OTA-based fractional-order integrator was used to design a fractional-order chaotic oscillator, showing good agreement between numerical and SPICE simulations.
(This article belongs to the Special Issue Feature Papers in Circuit and Signal Processing)
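The mono- versus many-objective distinction above rests on Pareto dominance, the core relation that NSGA-III and MOMBI-II build on. As a minimal illustration (not code from the paper; the candidate vectors are hypothetical), a non-dominated filter over five minimized objectives can be sketched as:

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (all objectives
    minimized): a is no worse everywhere and strictly better somewhere."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated(front):
    """Keep only the vectors not dominated by any other candidate."""
    return [a for a in front if not any(dominates(b, a) for b in front if b is not a)]

# Hypothetical sized-amplifier candidates; maximized targets (gain, CMRR)
# are negated so every entry is minimized: (1/FoM, -gain, power, -CMRR, area).
candidates = [
    (0.5, -60.0, 1.2, -80.0, 3.0),
    (0.4, -55.0, 1.5, -75.0, 2.5),
    (0.6, -58.0, 1.3, -78.0, 3.2),  # dominated by the first candidate
]
```

Many-objective algorithms differ mainly in how they pick among the surviving non-dominated solutions once, with five objectives, almost everything becomes non-dominated.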

26 pages, 40554 KiB  
Article
Design and Implementation of a Smart Energy Meter Using a LoRa Network in Real Time
by Francisco Sánchez-Sutil, Antonio Cano-Ortega and Jesús C. Hernández
Electronics 2021, 10(24), 3152; https://doi.org/10.3390/electronics10243152 - 17 Dec 2021
Cited by 13 | Viewed by 8037
Abstract
Nowadays, the development, implementation, and deployment of smart meters (SMs) are increasing in importance, and their expansion is exponential. The use of SMs in electrical engineering covers a multitude of applications, ranging from real-time monitoring to the study of load profiles in homes. The use of wireless technologies has helped this development. Various problems arise in the implementation of SMs, such as coverage, locations without Internet access, etc. LoRa (long range) technology has great coverage and low-power equipment that allow the installation of SMs in all types of locations, including those without Internet access. The objective of this research is to create an SM network under the LoRa specification that solves the problems presented by other wireless networks. For this purpose, a gateway for residential electricity metering networks using LoRa (GREMNL) and an electrical variable measuring device for households using LoRa (EVMDHL) have been created, which allow the development of SM networks with large coverage and low consumption.
(This article belongs to the Special Issue 10th Anniversary of Electronics: Recent Advances in Power Electronics)
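LoRa uplinks carry small, byte-limited payloads, so a device like the EVMDHL has to pack its readings compactly. A minimal sketch of such packing, assuming hypothetical field choices and fixed-point scalings (none of this is from the paper):

```python
import struct

# Hypothetical compact payload for one SM reading: voltage (V), current (A),
# active power (W), cumulative energy (Wh), sent over a LoRa uplink.
# '<' = little-endian; three uint16 fixed-point fields plus one uint32.
FMT = "<HHHI"  # 10-byte payload, well under typical LoRa payload limits

def encode(voltage, current, power, energy_wh):
    # Assumed fixed-point scaling: 0.1 V, 0.01 A, 1 W, 1 Wh resolution.
    return struct.pack(FMT, round(voltage * 10), round(current * 100),
                       round(power), round(energy_wh))

def decode(payload):
    v, i, p, e = struct.unpack(FMT, payload)
    return v / 10, i / 100, p, e
```

Keeping the payload this small also helps respect regional duty-cycle limits on LoRa transmissions.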

17 pages, 1500 KiB  
Article
Factors Affecting Students’ Acceptance of Mobile Learning Application in Higher Education during COVID-19 Using ANN-SEM Modelling Technique
by Mohammed Amin Almaiah, Enas Musa Al-lozi, Ahmad Al-Khasawneh, Rima Shishakly and Mirna Nachouki
Electronics 2021, 10(24), 3121; https://doi.org/10.3390/electronics10243121 - 15 Dec 2021
Cited by 27 | Viewed by 5571
Abstract
Due to the COVID-19 pandemic, most universities around the world started to employ distance-learning tools. To cope with these emergency conditions, some universities in Jordan have developed “mobile learning platforms” as a new tool for distance teaching and learning for students. This experience in Jordan is still new and needs to be evaluated in order to identify its advantages and challenges. Therefore, this study aims to investigate students’ perceptions about mobile learning platforms as well as to identify the crucial factors that influence students’ use of mobile learning platforms. An online quantitative survey technique using Twitter was employed to collect the data. A two-staged ANN-SEM modelling technique was adopted to analyze the causal relationships among constructs in the research model. The results of the study indicate that content quality and service quality significantly influenced perceived usefulness of mobile learning platforms. In addition, perceived ease of use and perceived usefulness significantly influenced behavioral intention to use mobile learning platforms. The study findings provide useful suggestions for decision makers, service providers, developers, and designers in the ministry of education as to how to assess and enhance mobile learning platform quality and understanding of multidimensional factors for effectively using mobile learning platforms.
(This article belongs to the Section Computer Science & Engineering)

21 pages, 3653 KiB  
Article
QoS-Ledger: Smart Contracts and Metaheuristic for Secure Quality-of-Service and Cost-Efficient Scheduling of Medical-Data Processing
by Abdullah Ayub Khan, Zaffar Ahmed Shaikh, Laura Baitenova, Lyailya Mutaliyeva, Nikita Moiseev, Alexey Mikhaylov, Asif Ali Laghari, Sahar Ahmed Idris and Hammam Alshazly
Electronics 2021, 10(24), 3083; https://doi.org/10.3390/electronics10243083 - 10 Dec 2021
Cited by 85 | Viewed by 4905
Abstract
Quality-of-service (QoS) is the term used to evaluate the overall performance of a service. In healthcare applications, efficient computation of QoS is one of the mandatory requirements during the processing of medical records through smart measurement methods. Medical services often involve the transmission of demanding information. Thus, there are stringent requirements for secure, intelligent, public-network quality-of-service. This paper contributes to three different aspects. First, we propose a novel metaheuristic approach for cost-efficient medical task scheduling, where an intelligent scheduler manages the tasks, such as the rate of service schedule, and lists items utilized by users during data processing and computation through the fog node. Second, a QoS efficient-computation algorithm is developed, which effectively monitors performance according to the indicator (parameter) with the analysis mechanism of quality-of-experience (QoE). Third, a framework of blockchain-distributed, technology-enabled QoS (QoS-ledger) computation in healthcare applications is proposed in a permissionless public peer-to-peer (P2P) network, which stores processed medical information in a distributed ledger. We have designed and deployed smart contracts for secure medical-data transmission and processing in serverless peering networks, handled overall node-protected interactions, and preserved logs in a blockchain distributed ledger. The simulation results show that QoS is computed on the blockchain public network with transmission power = an average of −10 to −17 dBm, jitter = 34 ms, delay = an average of 87 to 95 ms, throughput = 185 bytes, duty cycle = 8%, and a variable delivery and response-back route. Thus, the proposed QoS-ledger is a potential candidate for the computation of quality-of-service that is not limited to e-healthcare distributed applications.
(This article belongs to the Special Issue Security and Privacy in Blockchain/IoT)
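The jitter, delay, and throughput figures quoted above can all be derived from packet send/receive timestamps. A minimal sketch with hypothetical numbers (not the paper's measurement pipeline; jitter here is the mean absolute difference of consecutive one-way delays):

```python
def qos_metrics(send_ms, recv_ms, payload_bytes):
    """Mean one-way delay (ms), jitter (ms), and throughput (bytes/s)
    from per-packet send/receive timestamps and payload sizes."""
    delays = [r - s for s, r in zip(send_ms, recv_ms)]
    mean_delay = sum(delays) / len(delays)
    jitter = (sum(abs(b - a) for a, b in zip(delays, delays[1:]))
              / (len(delays) - 1))
    span_s = (recv_ms[-1] - send_ms[0]) / 1000.0  # observation window
    throughput = sum(payload_bytes) / span_s
    return mean_delay, jitter, throughput
```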

23 pages, 8752 KiB  
Article
Hyperledger Healthchain: Patient-Centric IPFS-Based Storage of Health Records
by Vinodhini Mani, Prakash Manickam, Youseef Alotaibi, Saleh Alghamdi and Osamah Ibrahim Khalaf
Electronics 2021, 10(23), 3003; https://doi.org/10.3390/electronics10233003 - 2 Dec 2021
Cited by 116 | Viewed by 10958
Abstract
Blockchain-based electronic health system growth is hindered by privacy, confidentiality, and security concerns. By protecting against these threats, this research aims to develop cybersecurity measurement approaches that ensure the security and privacy of patient information using blockchain technology in healthcare. Blockchains need huge resources to store big data. This paper presents an innovative solution, namely patient-centric healthcare data management (PCHDM). It comprises the following: (i) an on-chain health record database in which hashes of health records are stored as health record chains in Hyperledger Fabric, and (ii) off-chain solutions that encrypt actual health data and store it securely over the InterPlanetary File System (IPFS), a decentralized cloud storage system that ensures scalability and confidentiality and resolves the problem of blockchain data storage. A security smart contract hosted through container technology with Byzantine fault tolerance consensus ensures patient privacy by verifying patient preferences before sharing health records. The distributed ledger performance is tested under Hyperledger Caliper benchmarks in terms of transaction latency, resource utilization, and transactions per second. The model provides stakeholders with increased confidence in collaborating and sharing their health records.
(This article belongs to the Special Issue Blockchain Based Electronic Healthcare Solution and Security)
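The on-chain half of such a design stores only hashes of records, chained so that tampering anywhere invalidates everything after it, while the encrypted records themselves live off-chain on IPFS. A minimal sketch of the hash-chaining idea with hypothetical record fields (not the paper's schema or smart-contract code):

```python
import hashlib
import json

def record_hash(record, prev_hash):
    """Chain a health record: hash its canonical JSON together with the
    previous block hash, as in an on-chain record-hash ledger."""
    blob = json.dumps(record, sort_keys=True).encode() + bytes.fromhex(prev_hash)
    return hashlib.sha256(blob).hexdigest()

GENESIS = "00" * 32
r1 = {"patient": "p001", "obs": "hba1c 6.1"}   # hypothetical records
r2 = {"patient": "p001", "obs": "hba1c 5.9"}
h1 = record_hash(r1, GENESIS)
h2 = record_hash(r2, h1)
# Tampering with r1 changes h1 and therefore invalidates h2 as well.
```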

16 pages, 1777 KiB  
Article
Using Stochastic Computing for Virtual Screening Acceleration
by Christiam F. Frasser, Carola de Benito, Erik S. Skibinsky-Gitlin, Vincent Canals, Joan Font-Rosselló, Miquel Roca, Pedro J. Ballester and Josep L. Rosselló
Electronics 2021, 10(23), 2981; https://doi.org/10.3390/electronics10232981 - 30 Nov 2021
Cited by 3 | Viewed by 2768
Abstract
Stochastic computing is an emerging scientific field pushed by the need to develop high-performance artificial intelligence systems in hardware to quickly solve complex data processing problems. This is the case of virtual screening, a computational task aimed at searching across huge molecular databases for new drug leads. In this work, we show a classification framework in which molecules are described by an energy-based vector. This vector is then processed by an ultra-fast artificial neural network implemented on an FPGA by using stochastic computing techniques. Compared to other previously published virtual screening methods, this proposal provides similar or higher accuracy, while improving processing speed by about two to three orders of magnitude.
(This article belongs to the Section Artificial Intelligence Circuits and Systems (AICAS))
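The speed-up of stochastic computing comes from encoding values as random bitstreams, which turns multiplication into a single AND gate per bit. A minimal software illustration of unipolar stochastic multiplication (not the paper's FPGA design; stream length and seed are arbitrary):

```python
import random

def to_stream(p, n, rng):
    """Unipolar stochastic encoding: a length-n bitstream whose fraction
    of ones approximates the value p in [0, 1]."""
    return [1 if rng.random() < p else 0 for _ in range(n)]

def sc_multiply(p1, p2, n=10000, seed=0):
    """Multiply two probabilities with a bitwise AND of independent
    bitstreams; the result is the fraction of ones in the output."""
    rng = random.Random(seed)
    a = to_stream(p1, n, rng)
    b = to_stream(p2, n, rng)
    return sum(x & y for x, y in zip(a, b)) / n
```

Accuracy grows with stream length, which is the usual stochastic-computing trade-off between latency and precision.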

18 pages, 778 KiB  
Review
The Contribution of Machine Learning and Eye-Tracking Technology in Autism Spectrum Disorder Research: A Systematic Review
by Konstantinos-Filippos Kollias, Christine K. Syriopoulou-Delli, Panagiotis Sarigiannidis and George F. Fragulis
Electronics 2021, 10(23), 2982; https://doi.org/10.3390/electronics10232982 - 30 Nov 2021
Cited by 38 | Viewed by 8419
Abstract
Early and objective autism spectrum disorder (ASD) assessment, as well as early intervention, are particularly important and may have long-term benefits in the lives of people with ASD. ASD assessment relies on subjective rather than objective criteria, whereas advances in research point to up-to-date procedures for early ASD assessment comprising eye-tracking technology, machine learning, and other assessment tools. This systematic review, the first of its kind to our knowledge, provides a comprehensive discussion of 30 studies irrespective of the stimuli/tasks and datasets used, the algorithms applied, the eye-tracking tools utilised, and their goals. Evidence indicates that the combination of machine learning and eye-tracking technology could be considered a promising tool in autism research regarding early and objective diagnosis. Limitations and suggestions for future research are also presented.
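Machine learning features in such studies are typically built on top of fixation detection. A sketch of the classic dispersion-threshold (I-DT) idea, with hypothetical thresholds (the reviewed studies use a variety of tools and parameters):

```python
def _disp(pts):
    """Bounding-box dispersion of gaze points: x-extent + y-extent."""
    xs = [p[0] for p in pts]
    ys = [p[1] for p in pts]
    return (max(xs) - min(xs)) + (max(ys) - min(ys))

def fixations(gaze, max_disp=30.0, min_len=5):
    """I-DT sketch: a run of gaze samples whose dispersion (in pixels)
    stays below max_disp for at least min_len samples is one fixation."""
    out, i = [], 0
    while i + min_len <= len(gaze):
        if _disp(gaze[i:i + min_len]) <= max_disp:
            j = i + min_len
            while j < len(gaze) and _disp(gaze[i:j + 1]) <= max_disp:
                j += 1
            out.append((i, j))  # fixation spans samples [i, j)
            i = j
        else:
            i += 1
    return out
```

Fixation counts and durations extracted this way are common inputs to the classifiers the review discusses.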

20 pages, 2708 KiB  
Article
Model-Based Design of an Improved Electric Drive Controller for High-Precision Applications Based on Feedback Linearization Technique
by Pierpaolo Dini and Sergio Saponara
Electronics 2021, 10(23), 2954; https://doi.org/10.3390/electronics10232954 - 28 Nov 2021
Cited by 26 | Viewed by 3023
Abstract
This paper presents the design flow of an advanced non-linear control strategy able to compensate for the effects that the main sources of torque oscillation in synchronous electrical drives have on the positioning of the end-effector of a robot manipulator. The control technique used requires exhaustive modelling of the physical phenomena that cause the electromagnetic torque oscillations. In particular, the cogging and Stribeck effects are taken into account; their mathematical models are incorporated into the whole system of dynamic equations representing the complex mechatronic system, formed by the mechanics of the robot links and the dynamics of the actuators. Both the modelling procedure of the robot, which directly incorporates the dynamics of the actuators and the electrical drive (consisting of the modulation system and inverter), and the systematic procedure necessary to obtain the equations of the components of the control vector are described in detail. Using the Processor-In-the-Loop (PIL) paradigm on a Cortex-A53-based embedded system, the beneficial effect of the proposed advanced control strategy is validated in terms of end-effector position control, comparing a classic control system with the proposed algorithm in order to highlight the better performance in precision and in reducing oscillations.
(This article belongs to the Special Issue Operation and Control of Power Systems)
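Feedback linearization cancels the known nonlinearity in the plant model so that a simple linear outer loop governs the error dynamics. A single-link toy model (not the paper's drive or robot model; all parameters are hypothetical) illustrates the structure:

```python
import math

def simulate(theta_ref=1.0, m=1.0, l=0.5, g=9.81, kp=100.0, kd=20.0,
             dt=1e-3, steps=3000):
    """Feedback linearization of one rigid link with dynamics
    theta'' = -(g/l)*sin(theta) + u/(m*l*l).  The control law cancels
    the gravity term and imposes linear second-order error dynamics."""
    th, w = 0.0, 0.0  # angle (rad) and angular velocity (rad/s)
    for _ in range(steps):
        v = kp * (theta_ref - th) - kd * w             # linear outer loop
        u = m * l * l * (v + (g / l) * math.sin(th))   # cancel nonlinearity
        w += (-(g / l) * math.sin(th) + u / (m * l * l)) * dt
        th += w * dt                                   # semi-implicit Euler
    return th
```

With kp = 100 and kd = 20 the closed loop is critically damped, so the link settles on the reference without oscillation.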

8 pages, 585 KiB  
Article
Improving the Sensitivity of Chipless RFID Sensors: The Case of a Low-Humidity Sensor
by Giada Marchi, Viviana Mulloni, Omar Hammad Ali, Leandro Lorenzelli and Massimo Donelli
Electronics 2021, 10(22), 2861; https://doi.org/10.3390/electronics10222861 - 20 Nov 2021
Cited by 18 | Viewed by 3072
Abstract
This study introduces a valid strategy for increasing the sensitivity of chipless radio frequency identification (RFID) encoders. The idea is to properly select the dielectric substrate in order to enhance the contribution of the sensitive layer and to maximize the frequency shift of the resonance peak. The specific case of a chipless sensor suitable for the detection of humidity in low-humidity regimes is investigated with both numerical and experimental tests.
(This article belongs to the Special Issue Advances in Chipless RFID Technology)
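The frequency-shift mechanism can be illustrated with a lumped LC model of the resonator: moisture uptake raises the sensitive layer's permittivity, increasing the equivalent capacitance and lowering the resonance. All component values below are hypothetical, not the paper's design:

```python
import math

def resonant_freq_ghz(l_nh, c_pf):
    """f = 1 / (2*pi*sqrt(L*C)) for an LC-resonator model of the tag."""
    return 1.0 / (2 * math.pi * math.sqrt(l_nh * 1e-9 * c_pf * 1e-12)) / 1e9

# Hypothetical numbers: humidity uptake adds 10% to the equivalent
# capacitance, shifting the resonance peak downward.
f_dry = resonant_freq_ghz(10.0, 0.50)
f_wet = resonant_freq_ghz(10.0, 0.55)
shift_mhz = (f_dry - f_wet) * 1000
```

Sensitivity in this picture is the shift per unit of humidity, which is what the substrate selection in the paper is meant to maximize.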

12 pages, 2298 KiB  
Tutorial
Development and Application of an Augmented Reality Oyster Learning System for Primary Marine Education
by Min-Chai Hsieh
Electronics 2021, 10(22), 2818; https://doi.org/10.3390/electronics10222818 - 17 Nov 2021
Cited by 24 | Viewed by 3700
Abstract
Marine knowledge is such an important part of education that it has been integrated into various subjects and courses across educational levels. Previous research has indicated the importance of AR in assisting students during the learning process. This study proposes the AR Oyster Learning System (AROLS), which integrates mobile AR with a marine education teaching strategy for teachers in primary schools. To evaluate the effectiveness of the proposed approach, an experiment was conducted in a primary school natural science course regarding oysters. The participants consisted of 22 fourth-grade students. The experimental group comprised 11 students who learned with the AROLS learning approach, and the control group comprised 11 students who learned with the conventional multimedia learning approach. The results indicate that (1) students were interested in the AR learning approach, (2) students’ learning achievement and motivation were improved by the AR learning approach, (3) students acquired the target knowledge through the oyster course, and (4) students learned the importance of sustainability when taking online courses at home during the pandemic.
(This article belongs to the Special Issue Virtual Reality and Scientific Visualization)

22 pages, 6511 KiB  
Article
Power Conversion System Operation Algorithm for Efficient Energy Management of Microgrids
by Kwang-Su Na, Jeong Lee, Jun-Mo Kim, Yoon-Seong Lee, Junsin Yi and Chung-Yuen Won
Electronics 2021, 10(22), 2791; https://doi.org/10.3390/electronics10222791 - 14 Nov 2021
Cited by 6 | Viewed by 3242
Abstract
This paper investigates the operation of each power conversion system (PCS) for efficient energy management systems (EMSs) of microgrids (MGs). When MGs are linked to renewable energy sources (RESs), the reduction in power conversion efficiency can be minimized. Furthermore, energy storage systems (ESSs) are utilized to manage the surplus power of RESs. Thus, the present work presents a method to minimize the use of the existing power grid and increase the utilization rate of energy generated through RESs. To minimize the use of the existing power grid, a PCS operation method for photovoltaics (PV) and ESS used in MGs is proposed. PV, when it is directly connected as an intermittent energy source, induces voltage fluctuations in the distribution network. Thus, to overcome this shortcoming, this paper utilizes a system that connects PV and a distributed energy storage system (DESS). A PV-DESS integrated module is designed and controlled for tracking constant power. In addition, the DESS serves to compensate for the insufficient power generation of PV. The main energy storage systems (MESSs) used in MGs affect all aspects of the power management in the system. Because MGs perform their operations based on the capacity of the MESS, a PCS designed with a large capacity is utilized to stably operate the system. Because the MESS performs energy management through operations under various load conditions, it must have constant efficiency under all load conditions. Therefore, this paper proposes a PCS operation algorithm with constant efficiency for the MESS. Utilizing the operation algorithm of each PCS, this paper describes the efficient energy management of the MG and further proposes an algorithm for operating the existing power grid at the minimum level.
(This article belongs to the Section Power Electronics)
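The "constant efficiency under all load conditions" goal can be illustrated by load sharing across parallel PCS modules so that each runs near its efficiency sweet spot instead of one module running at a poor load fraction. A toy dispatch rule with a hypothetical efficiency curve (this is not the paper's algorithm):

```python
def efficiency(load_frac):
    """Hypothetical converter efficiency curve, peaking near 50% load."""
    return 0.98 - 0.25 * (load_frac - 0.5) ** 2 if 0 < load_frac <= 1 else 0.0

def modules_to_run(p_kw, module_kw=50.0, n_max=4):
    """Choose how many parallel PCS modules to run so that each module's
    load fraction lands near the peak of the efficiency curve."""
    return max(range(1, n_max + 1),
               key=lambda n: efficiency(p_kw / (n * module_kw))
               if p_kw <= n * module_kw else -1)
```

Running four modules at 50% load then beats running two at full load, which is the intuition behind efficiency-aware PCS operation.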

25 pages, 2015 KiB  
Review
Stock Market Prediction Using Machine Learning Techniques: A Decade Survey on Methodologies, Recent Developments, and Future Directions
by Nusrat Rouf, Majid Bashir Malik, Tasleem Arif, Sparsh Sharma, Saurabh Singh, Satyabrata Aich and Hee-Cheol Kim
Electronics 2021, 10(21), 2717; https://doi.org/10.3390/electronics10212717 - 8 Nov 2021
Cited by 136 | Viewed by 61060
Abstract
With the advent of technological marvels like global digitization, the prediction of the stock market has entered a technologically advanced era, revamping the old model of trading. With the ceaseless increase in market capitalization, stock trading has become a center of investment for many financial investors. Many analysts and researchers have developed tools and techniques that predict stock price movements and help investors make proper decisions. Advanced trading models enable researchers to predict the market using non-traditional textual data from social platforms. The application of advanced machine learning approaches such as text data analytics and ensemble methods has greatly increased prediction accuracy. Meanwhile, the analysis and prediction of stock markets continue to be one of the most challenging research areas due to their dynamic, erratic, and chaotic data. This study explains the systematics of machine learning-based approaches for stock market prediction based on the deployment of a generic framework. Findings from the last decade (2011–2021), retrieved from online digital libraries and databases like the ACM Digital Library and Scopus, were critically analyzed. Furthermore, an extensive comparative analysis was carried out to identify the direction of significance. The study should help emerging researchers understand the basics and advancements of this emerging area and thus carry on further research in promising directions.
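Most of the surveyed prediction pipelines reduce to the same framing: turn a price series into supervised (features, label) pairs over a sliding window. A minimal sketch of that framing (illustrative only; real pipelines add technical indicators, text features, and so on):

```python
def windows(prices, lookback=3):
    """Build (features, label) pairs: the last `lookback` returns are the
    features, and the label is whether the next return is positive."""
    rets = [(b - a) / a for a, b in zip(prices, prices[1:])]
    pairs = []
    for i in range(len(rets) - lookback):
        x = rets[i:i + lookback]          # feature window
        y = 1 if rets[i + lookback] > 0 else 0  # next-step direction
        pairs.append((x, y))
    return pairs
```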

12 pages, 1953 KiB  
Article
IoT and Cloud Computing in Health-Care: A New Wearable Device and Cloud-Based Deep Learning Algorithm for Monitoring of Diabetes
by Ahmed R. Nasser, Ahmed M. Hasan, Amjad J. Humaidi, Ahmed Alkhayyat, Laith Alzubaidi, Mohammed A. Fadhel, José Santamaría and Ye Duan
Electronics 2021, 10(21), 2719; https://doi.org/10.3390/electronics10212719 - 8 Nov 2021
Cited by 76 | Viewed by 6397
Abstract
Diabetes is a chronic disease that can affect human health negatively when glucose levels in the blood are elevated beyond a certain range, a condition called hyperglycemia. Current devices for continuous glucose monitoring (CGM) supervise the glucose level in the blood and alert users with type-1 diabetes once a certain critical level is surpassed. This can leave the patient's body working at critical levels until the medicine is taken to reduce the glucose level, consequently increasing the risk of considerable health damage if the intake is delayed. To overcome the latter, a new approach based on cutting-edge software and hardware technologies is proposed in this paper. Specifically, an artificial intelligence deep learning (DL) model is proposed to predict glucose levels over a 30 min horizon. Moreover, Cloud computing and IoT technologies are used to implement the prediction model and combine it with the existing wearable CGM model to provide patients with predictions of future glucose levels. Among the many DL methods in the state-of-the-art (SoTA), a cascaded RNN-RBM DL model based on both recurrent neural networks (RNNs) and restricted Boltzmann machines (RBMs) has been considered due to its superior properties regarding improved prediction accuracy. The conducted experiments show that the proposed Cloud&DL-based wearable approach achieves an average RMSE of 15.589, thereby outperforming similar existing blood glucose prediction methods in the SoTA.
(This article belongs to the Special Issue New Technological Advancements and Applications of Deep Learning)
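The accuracy figure quoted above is a root-mean-square error over the 30 min prediction horizon; for reference, the metric itself is simply:

```python
import math

def rmse(y_true, y_pred):
    """Root-mean-square error between measured and predicted values,
    the accuracy measure quoted for the glucose predictions."""
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
                     / len(y_true))
```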

16 pages, 3851 KiB  
Article
Predicting Regional Outbreaks of Hepatitis A Using 3D LSTM and Open Data in Korea
by Kwangok Lee, Munkyu Lee and Inseop Na
Electronics 2021, 10(21), 2668; https://doi.org/10.3390/electronics10212668 - 31 Oct 2021
Cited by 5 | Viewed by 2269
Abstract
In 2020 and 2021, humanity lived in fear due to the COVID-19 pandemic. However, with the development of artificial intelligence technology, mankind is attempting to tackle many challenges from currently unpredictable epidemics. Korean society has been exposed to various infectious diseases since the Korean War in 1950, and to overcome them, the six most serious cases in National Notifiable Infectious Diseases (NNIDs) category I were defined. Although most infectious diseases have been overcome, viral hepatitis A has been on the rise in Korean society since 2010. Therefore, in this paper, regional outbreaks of viral hepatitis A, which is spreading rapidly in Korean society, were predicted using a deep learning technique and publicly available datasets. For this study, we gathered information from five organizations based on the open data policy: the Korea Centers for Disease Control and Prevention (KCDC), the National Institute of Environmental Research (NIER), the Korea Meteorological Agency (KMA), the Public Open Data Portal, and the Korea Environment Corporation (KECO). Patient information, water environment information, weather information, population information, and air pollution information were acquired, and correlations were identified. Next, an epidemic outbreak prediction was performed using data preprocessing and a 3D LSTM. The experimental results were compared with various machine learning methods through RMSE. In this paper, we attempted to predict regional epidemic outbreaks of hepatitis A by linking the open data environment with deep learning. It is expected that the experimental process and results will be used to demonstrate the importance and usefulness of establishing an open data environment.
(This article belongs to the Special Issue Machine Learning in Electronic and Biomedical Engineering)

23 pages, 2809 KiB  
Article
Examining the Factors Influencing the Mobile Learning Applications Usage in Higher Education during the COVID-19 Pandemic
by Ahmad Althunibat, Mohammed Amin Almaiah and Feras Altarawneh
Electronics 2021, 10(21), 2676; https://doi.org/10.3390/electronics10212676 - 31 Oct 2021
Cited by 53 | Viewed by 6056
Abstract
Recently, the emergence of COVID-19 has caused a sharp acceleration in the use of mobile learning applications in learning and education. The adoption of mobile learning still needs more research. Therefore, this study seeks to understand the factors influencing mobile learning adoption in higher education by employing the Information System Success (ISS) model. The proposed model is evaluated through an SEM approach. The findings show that the proposed research model could explain 63.9% of the variance in the actual use of mobile learning systems, which offers important insight for understanding the impact of educational, environmental, and quality factors on mobile learning system actual use. The findings also indicate that institutional policy, change management, and top management support have positive effects on the actual use of mobile learning systems, mediated by quality factors. Furthermore, the results indicate that functionality, design quality, and usability have positive effects on the actual use of mobile learning systems, mediated by student satisfaction. The findings of this study provide practical suggestions for designers, developers, and decision makers in universities on how to enhance the use of mobile learning applications and thus derive greater benefits from mobile learning systems.
(This article belongs to the Section Computer Science & Engineering)

12 pages, 6817 KiB  
Article
A Convolutional Neural Network-Based End-to-End Self-Driving Using LiDAR and Camera Fusion: Analysis Perspectives in a Real-World Environment
by Mingyu Park, Hyeonseok Kim and Seongkeun Park
Electronics 2021, 10(21), 2608; https://doi.org/10.3390/electronics10212608 - 26 Oct 2021
Cited by 10 | Viewed by 5553
Abstract
In this paper, we develop end-to-end autonomous driving based on a 2D LiDAR sensor and a camera sensor that predicts the control values of the vehicle from the input data, instead of modeling rule-based autonomous driving. Unlike many studies that rely on simulated data, we built our end-to-end driving algorithm from data obtained in real driving and analyzed the performance of the proposed algorithm. Based on data collected in an actual urban driving environment, end-to-end autonomous driving was possible even in unstructured situations, such as at a traffic signal, by predicting the vehicle control values with a convolutional neural network. In addition, this paper addresses the data imbalance problem by eliminating redundant frames recorded while the vehicle is stopped, which improves self-driving performance. Finally, we verified through activation maps how the network predicts the vertical and horizontal control values by recognizing traffic facilities in the driving environment. Experiments and analysis demonstrate the validity of the proposed algorithm. Full article
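The redundancy-elimination step described in the abstract can be sketched as follows; the speed threshold and subsampling rate here are assumptions for illustration, not the authors' exact parameters:

```python
# Illustrative sketch: drop near-duplicate frames logged while the vehicle is
# stopped, to rebalance an end-to-end driving dataset. Threshold values are
# assumptions, not the paper's actual procedure.

def rebalance(frames, speed_eps=0.1, keep_every=10):
    """Keep all moving frames; keep only every `keep_every`-th stopped frame.

    frames: list of (speed, steering, frame_id) tuples.
    """
    kept, stopped_run = [], 0
    for speed, steering, frame_id in frames:
        if speed > speed_eps:                    # vehicle moving: always keep
            kept.append((speed, steering, frame_id))
            stopped_run = 0
        else:                                    # vehicle stopped: subsample
            if stopped_run % keep_every == 0:
                kept.append((speed, steering, frame_id))
            stopped_run += 1
    return kept

frames = [(0.0, 0.0, i) for i in range(30)] + [(5.0, 0.2, 100 + i) for i in range(5)]
balanced = rebalance(frames)   # 30 stopped frames collapse to 3; 5 moving survive
```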
(This article belongs to the Special Issue AI-Based Autonomous Driving System)

27 pages, 3025 KiB  
Article
Integration Strategy and Tool between Formal Ontology and Graph Database Technology
by Stefano Ferilli
Electronics 2021, 10(21), 2616; https://doi.org/10.3390/electronics10212616 - 26 Oct 2021
Cited by 27 | Viewed by 4148
Abstract
Ontologies, and especially formal ones, have traditionally been investigated as a means to formalize an application domain so as to carry out automated reasoning on it. The union of the terminological part of an ontology and the corresponding assertional part is known as a Knowledge Graph. Database technology, on the other hand, has often focused on the optimal organization of data so as to boost efficiency in its storage, management, and retrieval. Graph databases are a recent technology specifically focused on element-driven data browsing rather than on batch processing. While the complementarity and connections between these technologies are patent and intuitive, little exists to bring them to full integration and cooperation. This paper aims to bridge this gap by proposing an intermediate format that can be easily mapped onto the formal ontology on the one hand, so as to allow complex reasoning, and onto the graph database on the other, so as to benefit from efficient data handling. Full article
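As a much-simplified illustration of such a mapping, RDF-like triples can be projected onto a property-graph structure of nodes and edges; the field names below are hypothetical and do not reflect the paper's actual intermediate format:

```python
# Hypothetical sketch: map (subject, predicate, object) triples onto a
# property-graph representation (nodes dict + edge list), so the same data
# could feed both a reasoner and a graph database. Illustration only; not
# the format proposed in the paper.

def triples_to_graph(triples):
    nodes, edges = {}, []
    for s, p, o in triples:
        if p == "rdf:type":                        # typing triples become node labels
            nodes.setdefault(s, {"labels": set()})["labels"].add(o)
        else:                                      # all other triples become edges
            nodes.setdefault(s, {"labels": set()})
            nodes.setdefault(o, {"labels": set()})
            edges.append({"from": s, "to": o, "type": p})
    return nodes, edges

triples = [
    ("alice", "rdf:type", "Person"),
    ("acme", "rdf:type", "Company"),
    ("alice", "worksFor", "acme"),
]
nodes, edges = triples_to_graph(triples)
```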
(This article belongs to the Special Issue Knowledge Engineering and Data Mining)

36 pages, 1998 KiB  
Review
Analog Gaussian Function Circuit: Architectures, Operating Principles and Applications
by Vassilis Alimisis, Marios Gourdouparis, Georgios Gennis, Christos Dimas and Paul P. Sotiriadis
Electronics 2021, 10(20), 2530; https://doi.org/10.3390/electronics10202530 - 17 Oct 2021
Cited by 22 | Viewed by 6171
Abstract
This review paper explores existing architectures, operating principles, performance metrics and applications of analog Gaussian function circuits. Architectures based on the translinear principle, the bulk-controlled approach, the floating gate approach, the use of multiple differential pairs, compositions of different fundamental blocks and others are considered. Applications involving analog implementations of Machine Learning algorithms, neuromorphic circuits, smart sensor systems and fuzzy/neuro-fuzzy systems are discussed, focusing on the role of the Gaussian function circuit. Finally, a general discussion and concluding remarks are provided. Full article
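The transfer curve that these circuits approximate is the Gaussian function, with amplitude, mean, and width as the electrically tunable parameters; a minimal numerical sketch:

```python
import math

# The bell-shaped transfer curve an analog Gaussian function circuit
# approximates: amplitude A, centre (mean) m, and width sigma are the
# tunable parameters. Parameter values below are arbitrary illustrations.

def gaussian(v_in, A=1.0, m=0.0, sigma=1.0):
    return A * math.exp(-((v_in - m) ** 2) / (2.0 * sigma ** 2))

peak = gaussian(0.5, A=2.0, m=0.5, sigma=0.2)   # output at the centre equals A
tail = gaussian(1.5, A=2.0, m=0.5, sigma=0.2)   # far from the centre, near zero
```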

24 pages, 1152 KiB  
Review
Artificial Intelligence-Based Decision-Making Algorithms, Internet of Things Sensing Networks, and Deep Learning-Assisted Smart Process Management in Cyber-Physical Production Systems
by Mihai Andronie, George Lăzăroiu, Mariana Iatagan, Cristian Uță, Roxana Ștefănescu and Mădălina Cocoșatu
Electronics 2021, 10(20), 2497; https://doi.org/10.3390/electronics10202497 - 14 Oct 2021
Cited by 197 | Viewed by 16622
Abstract
With growing evidence of deep learning-assisted smart process planning, there is an essential demand for comprehending whether cyber-physical production systems (CPPSs) are adequate for managing complexity and flexibility in configuring the smart factory. In this research, prior findings were cumulated indicating that the interoperability between Internet of Things-based real-time production logistics and cyber-physical process monitoring systems can decide upon the progression of operations, advancing a system to the intended state in CPPSs. We carried out a quantitative literature review of ProQuest, Scopus, and the Web of Science between March and August 2021, with search terms including “cyber-physical production systems”, “cyber-physical manufacturing systems”, “smart process manufacturing”, “smart industrial manufacturing processes”, “networked manufacturing systems”, “industrial cyber-physical systems”, “smart industrial production processes”, and “sustainable Internet of Things-based manufacturing systems”. Of the research published between 2017 and 2021, only 489 papers met the eligibility criteria. By removing controversial or unclear findings (scanty/unimportant data), results unsupported by replication, undetailed content, and papers with very similar titles, we settled on 164, chiefly empirical, sources. Subsequent analyses should develop on real-time sensor networks, so as to configure the importance of artificial intelligence-driven big data analytics by use of cyber-physical production networks. Full article
(This article belongs to the Special Issue Big Data and Artificial Intelligence for Industry 4.0)

15 pages, 3893 KiB  
Article
Social Distance Monitoring Approach Using Wearable Smart Tags
by Tareq Alhmiedat and Majed Aborokbah
Electronics 2021, 10(19), 2435; https://doi.org/10.3390/electronics10192435 - 8 Oct 2021
Cited by 26 | Viewed by 12512
Abstract
Coronavirus has affected millions of people worldwide, and the number of infected people is still increasing. The virus is transmitted between people through direct, indirect, or close contact with infected individuals. To help prevent the social transmission of COVID-19, this paper presents a new smart social distance system that allows individuals to maintain a safe distance from others in indoor and outdoor environments, avoiding exposure to COVID-19 and slowing its spread locally and across the country. The proposed monitoring system consists of a new wearable prototype, a compact and low-cost electronic device based on human-detection and proximity-distance functions, that estimates the distance between people and issues a notification when that distance falls below a predefined threshold. The developed system has been validated through several experiments, achieving a high acceptance rate (96.1%) and a low localization error (<6 m). Full article
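The threshold rule at the core of the system can be sketched as follows; the 2 m threshold is an assumed value for illustration, not necessarily the device's configured setting:

```python
# Minimal sketch of the proximity-alert rule: estimate distances to detected
# people and raise a notification for anyone closer than a threshold.
# The 2 m value is an assumption for illustration.

SOCIAL_DISTANCE_M = 2.0

def check_proximity(distances_m, threshold=SOCIAL_DISTANCE_M):
    """Return indices of detected people who are closer than the threshold."""
    return [i for i, d in enumerate(distances_m) if d < threshold]

alerts = check_proximity([3.5, 1.2, 2.0, 0.8])   # people at 1.2 m and 0.8 m trigger
```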

19 pages, 6321 KiB  
Article
Image-Based Malware Classification Using VGG19 Network and Spatial Convolutional Attention
by Mazhar Javed Awan, Osama Ahmed Masood, Mazin Abed Mohammed, Awais Yasin, Azlan Mohd Zain, Robertas Damaševičius and Karrar Hameed Abdulkareem
Electronics 2021, 10(19), 2444; https://doi.org/10.3390/electronics10192444 - 8 Oct 2021
Cited by 119 | Viewed by 8789
Abstract
In recent years, the amount of malware spreading through the internet and infecting computers and other communication devices has increased tremendously. To date, countless techniques and methodologies have been proposed to detect and neutralize these malicious agents. However, as new and automated malware generation techniques emerge, a lot of malware continues to be produced that can bypass some state-of-the-art detection methods. Therefore, there is a need for the classification and detection of these adversarial agents, which can compromise the security of people, organizations, and countless other forms of digital assets. In this paper, we propose a spatial attention and convolutional neural network (SACNN) based on a deep learning framework for image-based classification of 25 well-known malware families, with and without class balancing. Performance was evaluated on the Malimg benchmark dataset using precision, recall, specificity, precision, and F1 score, on which our proposed model with class balancing reached 97.42%, 97.95%, 97.33%, 97.11%, and 97.32%, respectively. Experiments on SACNN with class balancing that included the benign class also produced scores above 97%. The results indicate that our proposed model can be used for image-based malware detection with high performance, despite being simpler than other available solutions. Full article
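For reference, the reported evaluation metrics follow from per-class confusion-matrix counts, as sketched below with illustrative counts (not the Malimg results):

```python
# How precision, recall (sensitivity), specificity, and F1 follow from a
# confusion matrix. The counts here are invented for illustration.

def metrics(tp, fp, fn, tn):
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)                  # also called sensitivity
    specificity = tn / (tn + fp)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, specificity, f1

p, r, s, f1 = metrics(tp=90, fp=10, fn=5, tn=95)
```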
(This article belongs to the Special Issue Security and Privacy for IoT and Multimedia Services)

20 pages, 1091 KiB  
Article
An 8-bit Radix-4 Non-Volatile Parallel Multiplier
by Chengjie Fu, Xiaolei Zhu, Kejie Huang and Zheng Gu
Electronics 2021, 10(19), 2358; https://doi.org/10.3390/electronics10192358 - 27 Sep 2021
Cited by 7 | Viewed by 4107
Abstract
The movement of data between processing and storage units has been one of the most critical issues in modern computer systems. The emerging Resistive Random Access Memory (RRAM) technology has drawn tremendous attention due to its non-volatility and its potential for in-memory computation, properties that make it a strong candidate for modern computing systems. In this paper, an 8-bit radix-4 non-volatile parallel multiplier with improved computational capability is proposed. The corresponding Booth encoding scheme, read-out circuit, simplified Wallace tree, and Manchester carry chain are presented, which help to shorten the delay of the proposed multiplier, while the RRAM saves computation time and overall power because the multiplicand is stored beforehand. The area of the proposed non-volatile multiplier is reduced while its computing speed is improved: the multiplier occupies 785.2 μm2 in the Generic Process Design Kit 45 nm process. Simulation results show that the proposed structure has a low computing power of 161.19 μW and a short delay of 0.83 ns with a 1.2 V supply voltage. Comparative analyses demonstrate the effectiveness of the proposed design: compared with conventional Booth multipliers, it reduces energy and delay by more than 70% and 19%, respectively. Full article
(This article belongs to the Special Issue Advanced Analog Circuits for Emerging Applications)

12 pages, 1114 KiB  
Article
Cricket Match Analytics Using the Big Data Approach
by Mazhar Javed Awan, Syed Arbaz Haider Gilani, Hamza Ramzan, Haitham Nobanee, Awais Yasin, Azlan Mohd Zain and Rabia Javed
Electronics 2021, 10(19), 2350; https://doi.org/10.3390/electronics10192350 - 26 Sep 2021
Cited by 33 | Viewed by 13409
Abstract
Cricket is one of the most liked, played, encouraged, and exciting sports today, and it calls for advancement with machine learning and artificial intelligence (AI) to attain more accuracy. With the increasing number of matches over time, the data related to cricket matches and individual players are growing rapidly. Moreover, the need for big data analytics, and the opportunities to use such data effectively, are also increasing: in the selection of players for a team, in predicting the winner of a match, and in many other future predictions using machine learning models or big data techniques. We applied a machine learning linear regression model to predict team scores, both without big data and with the big data framework Spark ML. The experimental results after applying linear regression in Spark ML, measured through accuracy, root mean square error (RMSE), mean square error (MSE), and mean absolute error (MAE), were 95%, 30.2, 1350.34, and 28.2, respectively. Furthermore, our approach can be applied to other sports. Full article
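The fit-and-score pipeline can be sketched in miniature with ordinary least squares on toy (overs, score) pairs; the paper itself runs linear regression in Spark ML on real match data, so everything below is illustrative:

```python
import math

# Miniature version of the evaluation pipeline: fit a least-squares line to
# (overs, score) pairs and report RMSE / MSE / MAE. Toy data for illustration.

def fit_line(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

def errors(xs, ys, slope, intercept):
    residuals = [y - (slope * x + intercept) for x, y in zip(xs, ys)]
    mse = sum(r * r for r in residuals) / len(residuals)
    mae = sum(abs(r) for r in residuals) / len(residuals)
    return math.sqrt(mse), mse, mae          # RMSE, MSE, MAE

overs = [5, 10, 15, 20]
scores = [38, 80, 121, 158]
slope, intercept = fit_line(overs, scores)
rmse, mse, mae = errors(overs, scores, slope, intercept)
```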
(This article belongs to the Special Issue Big Data Technologies: Explorations and Analytics)

22 pages, 26012 KiB  
Article
OATCR: Outdoor Autonomous Trash-Collecting Robot Design Using YOLOv4-Tiny
by Medhasvi Kulshreshtha, Sushma S. Chandra, Princy Randhawa, Georgios Tsaramirsis, Adil Khadidos and Alaa O. Khadidos
Electronics 2021, 10(18), 2292; https://doi.org/10.3390/electronics10182292 - 18 Sep 2021
Cited by 42 | Viewed by 20216
Abstract
This paper proposes an innovative mechanical design using the rocker-bogie mechanism for a resilient trash-collecting robot. The robot was developed to be completely autonomous: it can detect trash, move towards it, and pick it up while avoiding obstacles along the way. Sensors including a camera, an ultrasonic sensor, and a GPS module play an imperative role in automation, while the brains of the robot, a Raspberry Pi and an Arduino, process the sensor data and perform path planning and the consequent motion of the robot through actuation of the motors. Three models were experimented on and analyzed for trash detection: Mask-RCNN, YOLOv4, and YOLOv4-tiny. Mask-RCNN achieved a mean average precision (mAP) of over 83% with a detection time (DT) of 3973.29 ms, YOLOv4 achieved 97.1% mAP with 32.76 ms DT, and YOLOv4-tiny achieved 95.2% mAP with 5.21 ms DT. YOLOv4-tiny was selected as it offered an mAP very similar to YOLOv4 but with a much lower DT. The design was simulated on different terrains and behaved as expected. Full article
(This article belongs to the Section Systems & Control Engineering)

15 pages, 3417 KiB  
Article
Mitigating Broadcasting Storm Using Multihead Nomination Clustering in Vehicular Content Centric Networks
by Ayesha Siddiqa, Muhammad Diyan, Muhammad Toaha Raza Khan, Malik Muhammad Saad and Dongkyun Kim
Electronics 2021, 10(18), 2270; https://doi.org/10.3390/electronics10182270 - 15 Sep 2021
Cited by 5 | Viewed by 2676
Abstract
Vehicles are highly mobile nodes and therefore frequently change their topology. To maintain a stable connection with the server in high-speed vehicular networks, the handover process must be restarted repeatedly to satisfy content requests. Vehicular content-centric networks (VCCNs) address this by adopting in-network caching instead of destination-based routing to satisfy requests, and various routing protocols have been proposed to increase their communication efficiency. However, when communication links are disrupted by the mobility of the head vehicle, the vehicles create a broadcasting storm that increases communication delay and the packet drop fraction. To address these issues, we propose a multihead nomination clustering scheme. It extends the hello packet header to obtain vehicle information from the cluster vehicles, and a novel cluster information table (CIT) maintains several nominated head vehicles per cluster at the roadside units (RSUs). When a communication link is disrupted by the head vehicle's mobility, the RSU nominates a new head vehicle using the CIT entries, eliminating the broadcasting storm effect on disrupted links. The proposed scheme increases the successful communication rate, decreases communication delay, and ensures a high cache success ratio as the number of vehicles increases. Full article
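The CIT-based head re-nomination idea can be sketched as follows; the field names and vehicle identifiers are illustrative, not the paper's packet format:

```python
# Sketch of the cluster information table (CIT) idea: the RSU keeps several
# nominated head candidates per cluster so that, when the current head drives
# out of range, the next candidate takes over without re-broadcasting.
# All names below are invented for illustration.

cit = {
    "cluster_7": {"heads": ["veh_12", "veh_31", "veh_05"], "active": "veh_12"},
}

def handle_head_loss(cit, cluster, lost_vehicle):
    """On losing the active head, promote the next nominated candidate."""
    entry = cit[cluster]
    if entry["active"] == lost_vehicle:
        entry["heads"].remove(lost_vehicle)
        entry["active"] = entry["heads"][0] if entry["heads"] else None
    return entry["active"]

new_head = handle_head_loss(cit, "cluster_7", "veh_12")   # veh_31 takes over
```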
(This article belongs to the Special Issue 10th Anniversary of Electronics: Advances in Networks)

19 pages, 3274 KiB  
Article
Advancing Logistics 4.0 with the Implementation of a Big Data Warehouse: A Demonstration Case for the Automotive Industry
by Nuno Silva, Júlio Barros, Maribel Y. Santos, Carlos Costa, Paulo Cortez, M. Sameiro Carvalho and João N. C. Gonçalves
Electronics 2021, 10(18), 2221; https://doi.org/10.3390/electronics10182221 - 10 Sep 2021
Cited by 28 | Viewed by 9646
Abstract
The constant advancements in Information Technology have been the main driver of the Big Data concept's success. With it, new concepts such as Industry 4.0 and Logistics 4.0 are arising. Due to the increase in data volume, velocity, and variety, organizations are now looking at their data analytics infrastructures and searching for ways to improve their decision-making capabilities and enhance their results using new approaches such as Big Data and Machine Learning. The implementation of a Big Data Warehouse can be the first step in improving an organization's data analysis infrastructure and in starting to retrieve value from Big Data technologies. Moving to Big Data technologies offers organizations several opportunities, such as the capability to analyze an enormous quantity of data from different data sources efficiently. At the same time, however, different challenges can arise, including data quality, data management, and lack of knowledge within the organization, among others. In this work, we propose an approach that can be adopted in the logistics department of any organization to promote the Logistics 4.0 movement, while highlighting the main challenges and opportunities associated with the development and implementation of a Big Data Warehouse in a real demonstration case at a multinational automotive organization. Full article
(This article belongs to the Special Issue Big Data and Artificial Intelligence for Industry 4.0)

19 pages, 1956 KiB  
Article
A Comprehensive Analysis of Deep Neural-Based Cerebral Microbleeds Detection System
by Maria Anna Ferlin, Michał Grochowski, Arkadiusz Kwasigroch, Agnieszka Mikołajczyk, Edyta Szurowska, Małgorzata Grzywińska and Agnieszka Sabisz
Electronics 2021, 10(18), 2208; https://doi.org/10.3390/electronics10182208 - 9 Sep 2021
Cited by 14 | Viewed by 3229
Abstract
Machine learning-based systems are gaining interest in the field of medicine, mostly in medical imaging and diagnosis. In this paper, we address the problem of automatic cerebral microbleed (CMB) detection in magnetic resonance images. The task is challenging due to the difficulty of distinguishing a true CMB from its mimics; however, if successfully solved, it would streamline radiologists' work. To deal with this complex three-dimensional problem, we propose a machine learning approach based on a 2D Faster RCNN network. We aimed to achieve a reliable system, i.e., one with balanced sensitivity and precision. Therefore, we researched and analysed, among other factors, the impact of how the training data are provided to the system, their pre-processing, the choice of model and its structure, and the means of regularisation. Furthermore, we carefully analysed the network predictions and proposed an algorithm for post-processing them. The proposed approach achieved high precision (89.74%), sensitivity (92.62%), and F1 score (90.84%). The paper presents the main challenges connected with automatic cerebral microbleed detection, their deep analysis, and the developed system. The conducted research may significantly contribute to automatic medical diagnosis. Full article
(This article belongs to the Special Issue Machine Learning in Electronic and Biomedical Engineering)

36 pages, 32069 KiB  
Review
A Survey of the Tactile Internet: Design Issues and Challenges, Applications, and Future Directions
by Vaibhav Fanibhare, Nurul I. Sarkar and Adnan Al-Anbuky
Electronics 2021, 10(17), 2171; https://doi.org/10.3390/electronics10172171 - 6 Sep 2021
Cited by 35 | Viewed by 10412
Abstract
The Tactile Internet (TI) is an emerging area of research involving 5G and beyond (B5G) communications to enable real-time interaction of haptic data over the Internet between tactile ends, with audio-visual data as feedback. This emerging technology is viewed as the next evolutionary step for the Internet of Things (IoT) and is expected to bring about massive change in Healthcare 4.0, Industry 4.0 and autonomous vehicles, resolving complicated issues in modern society and turning this vision of the TI from dream into reality. This article provides a comprehensive survey of the TI, focussing on design architecture, key application areas, potential enabling technologies, and the current issues and challenges in realising it. To illustrate the novelty of our work, we present a brainstorming mind-map of all the topics discussed in this article. We emphasise the design aspects of the TI and discuss its three main sections, i.e., the master, network, and slave sections, with a focus on the proposed application-centric design architecture. With the help of illustrative use-case diagrams, we discuss and tabulate the possible applications of the TI within a 5G framework, together with their requirements. We then extensively address the currently identified issues and challenges, along with promising potential enablers of the TI, and provide a comprehensive review of related articles on enabling technologies, including Fifth Generation (5G), Software-Defined Networking (SDN), Network Function Virtualisation (NFV), Cloud/Edge/Fog Computing, Multiple Access, and Network Coding. Finally, we conclude the survey with several research issues that are open for further investigation. The survey thus provides insights into the TI that can help network researchers and engineers contribute towards developing the next-generation Internet. Full article

14 pages, 34742 KiB  
Article
Implementation of an Award-Winning Invasive Fish Recognition and Separation System
by Jin Chai, Dah-Jye Lee, Beau Tippetts and Kirt Lillywhite
Electronics 2021, 10(17), 2182; https://doi.org/10.3390/electronics10172182 - 6 Sep 2021
Viewed by 2560
Abstract
The state of Michigan, U.S.A., was awarded USD 1 million in March 2018 for the Great Lakes Invasive Carp Challenge. The challenge sought new and novel technologies to function independently of, or in conjunction with, the fish deterrents already in place to prevent the movement of invasive carp species into the Great Lakes from the Illinois River through the Chicago Area Waterway System (CAWS). Our team proposed an environmentally friendly, low-cost, vision-based fish recognition and separation system, which won fourth place in the challenge out of 353 participants from 27 countries. The proposed solution includes an underwater imaging system that captures fish images for processing, a fish species recognition algorithm that identifies invasive carp species, and a mechanical system that guides fish movement and restrains invasive fish for removal. We used our evolutionary learning-based algorithm to recognize fish species, which is considered the most challenging task of this solution. The algorithm was tested with a fish dataset consisting of four invasive and four non-invasive fish species. It achieved a remarkable 1.58% error rate, which is more than adequate for the proposed system, and required only a small number of images for training. This paper details the design of this unique solution and the implementation and testing that have been accomplished since the challenge. Full article
(This article belongs to the Special Issue Convolutional Neural Networks and Vision Applications)

42 pages, 2364 KiB  
Article
Exploiting the Outcome of Outlier Detection for Novel Attack Pattern Recognition on Streaming Data
by Michael Heigl, Enrico Weigelt, Andreas Urmann, Dalibor Fiala and Martin Schramm
Electronics 2021, 10(17), 2160; https://doi.org/10.3390/electronics10172160 - 4 Sep 2021
Cited by 7 | Viewed by 4138
Abstract
Future-oriented networking infrastructures are characterized by highly dynamic Streaming Data (SD) whose volume, speed, and number of dimensions have increased significantly over the past couple of years, driven by trends such as Software-Defined Networking and Artificial Intelligence. As an essential core component of network security, Intrusion Detection Systems (IDS) help to uncover malicious activity; in particular, consecutively applied alert correlation methods can aid in mining attack patterns from the alerts generated by IDS. However, most existing methods lack the functionality to deal with SD affected by the phenomenon called concept drift and are mainly designed to operate on the output of signature-based IDS. Although unsupervised Outlier Detection (OD) methods have the ability to detect yet unknown attacks, most alert correlation methods cannot handle the output of such anomaly-based IDS. In this paper, we introduce a novel framework called Streaming Outlier Analysis and Attack Pattern Recognition (SOAAPR), which processes the output of various online unsupervised OD methods in a streaming fashion to extract information about novel attack patterns. SOAAPR computes three different privacy-preserving, fingerprint-like signatures from the clustered set of correlated alerts, characterizing the potential attack scenarios with respect to their communication relations, their manifestation in the data's features, and their temporal behavior. Beyond the recognition of known attacks, the derived signatures can be compared to find similarities between novel, previously unseen attack patterns. The evaluation, which is split into two parts, uses attack scenarios from the widely used and popular CICIDS2017 and CSE-CIC-IDS2018 datasets. Firstly, the streaming alert correlation capability is evaluated on CICIDS2017 and compared to a state-of-the-art offline algorithm, Graph-based Alert Correlation (GAC), which has the potential to deal with the output of anomaly-based IDS. Secondly, the three types of signatures are computed from attack scenarios in the datasets and compared to each other. The results show, on the one hand, that SOAAPR can compete with GAC in alert correlation capability across four different metrics and outperforms it significantly in processing time, by an average factor of 70 over 11 attack scenarios. On the other hand, in most cases, all three types of signatures reliably characterize attack scenarios, grouping similar ones together, with up to 99.05% similarity between the FTP and SSH Patator attacks. Full article
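The signature-comparison idea can be illustrated with a much-simplified stand-in: reduce each attack scenario to a set of observations and compare sets with Jaccard similarity. SOAAPR's actual three signature types are richer than this, so the representation below is purely illustrative:

```python
# Toy stand-in for fingerprint-like signature comparison: each scenario is a
# set of (src, dst, feature) observations; similar attacks share most
# observations. Not SOAAPR's actual signature construction.

def jaccard(sig_a, sig_b):
    inter = len(sig_a & sig_b)
    union = len(sig_a | sig_b)
    return inter / union if union else 1.0

ftp_patator = {("atk", "srv", "auth_fail"), ("atk", "srv", "high_rate"),
               ("atk", "srv", "port_21")}
ssh_patator = {("atk", "srv", "auth_fail"), ("atk", "srv", "high_rate"),
               ("atk", "srv", "port_22")}
similarity = jaccard(ftp_patator, ssh_patator)   # 2 shared of 4 total
```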
(This article belongs to the Special Issue Data Security)

16 pages, 1210 KiB  
Article
Performance of Micro-Scale Transmission & Reception Diversity Schemes in High Throughput Satellite Communication Networks
by Apostolos Z. Papafragkakis, Charilaos I. Kouroriorgas and Athanasios D. Panagopoulos
Electronics 2021, 10(17), 2073; https://doi.org/10.3390/electronics10172073 - 27 Aug 2021
Cited by 5 | Viewed by 2426
Abstract
The use of the Ka and Q/V bands could be a promising solution to accommodate higher-data-rate, interactive services; however, at these frequency bands, signal attenuation due to various atmospheric phenomena, and more particularly due to rain, can be a serious limiting factor for system performance and availability. To alleviate this possible barrier, short- and large-scale diversity schemes have been proposed and examined in the past; in this paper, a micro-scale site diversity system is evaluated in terms of capacity gain using rain attenuation time series generated with the Synthetic Storm Technique (SST). The input to the SST was four years of experimental rainfall data from two stations with a separation distance of 386 m at the National Technical University of Athens (NTUA) campus in Athens, Greece. Additionally, a novel multi-dimensional synthesizer based on Gaussian copulas, parameterized for the case of multiple-site micro-scale diversity systems, is presented and evaluated. In all examined scenarios, a significant capacity gain is observed, indicating that micro-scale site diversity systems could be a viable choice for enterprise users seeking to increase achievable data rates and improve link availability. Full article
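The multi-site Gaussian-copula idea can be sketched as follows: correlated standard normals are drawn and pushed through lognormal margins (a common model for rain attenuation in dB), after which the benefit of selecting the less-attenuated site can be read off. The correlation, margin parameters, and 10 dB fade margin are illustrative assumptions, not the paper's fitted values.

```python
import numpy as np

# Illustrative sketch of a two-site Gaussian-copula synthesizer for
# correlated rain attenuation. All parameter values are assumptions.

rng = np.random.default_rng(0)
rho = 0.8                                  # assumed spatial correlation
cov = [[1.0, rho], [rho, 1.0]]

# 1) Sample correlated standard normals (the Gaussian copula core).
z = rng.multivariate_normal([0.0, 0.0], cov, size=100_000)

# 2) Lognormal rain-attenuation margins A = exp(m + s*z) in dB; since the
#    lognormal is a monotone transform of the normal, the copula
#    (rank correlation) is preserved exactly.
m, s = 0.5, 1.0
att = np.exp(m + s * z)            # shape (N, 2): attenuation at sites 1, 2

# Micro-scale site diversity: the link uses the less-attenuated site.
joint = att.min(axis=1)
single = att[:, 0]
p_single = (single > 10.0).mean()  # outage probability, 10 dB fade margin
p_div = (joint > 10.0).mean()
print(p_div <= p_single)           # diversity can only help -> True
```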
(This article belongs to the Special Issue State-of-the-Art in Satellite Communication Networks)

18 pages, 3914 KiB  
Article
A CEEMDAN-Assisted Deep Learning Model for the RUL Estimation of Solenoid Pumps
by Ugochukwu Ejike Akpudo and Jang-Wook Hur
Electronics 2021, 10(17), 2054; https://doi.org/10.3390/electronics10172054 - 25 Aug 2021
Cited by 12 | Viewed by 3087
Abstract
This paper develops a data-driven remaining useful life (RUL) prediction model for solenoid pumps. The model extracts high-level features using stacked autoencoders from pressure signals decomposed with the complete ensemble empirical mode decomposition with adaptive noise (CEEMDAN) algorithm. These high-level features are then fed to a recurrent neural network with gated recurrent units (GRUs) for RUL estimation. The case study presented demonstrates the robustness of the proposed RUL estimation model with extensive empirical validation. The results support the validity of using CEEMDAN for non-stationary signal decomposition and the accuracy, ease of use, and superiority of the proposed deep-learning-based model for solenoid pump failure prognostics. Full article
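The recurrent stage of such a pipeline can be sketched with a single, manually implemented GRU cell followed by a linear regression head. Everything here is schematic: the weights are random, and the CEEMDAN decomposition and stacked autoencoders are replaced by a stand-in feature sequence.

```python
import numpy as np

# Schematic sketch of the GRU stage: decomposed-signal features pass
# through one GRU layer and a linear head producing an RUL estimate.
rng = np.random.default_rng(42)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class GRUCell:
    def __init__(self, n_in, n_h):
        s = 1.0 / np.sqrt(n_h)
        self.Wz = rng.uniform(-s, s, (n_h, n_in + n_h))
        self.Wr = rng.uniform(-s, s, (n_h, n_in + n_h))
        self.Wh = rng.uniform(-s, s, (n_h, n_in + n_h))

    def step(self, x, h):
        xh = np.concatenate([x, h])
        z = sigmoid(self.Wz @ xh)            # update gate
        r = sigmoid(self.Wr @ xh)            # reset gate
        h_tilde = np.tanh(self.Wh @ np.concatenate([x, r * h]))
        return (1 - z) * h + z * h_tilde

n_feat, n_hidden, T = 8, 16, 50
cell = GRUCell(n_feat, n_hidden)
w_out = rng.normal(size=n_hidden)

h = np.zeros(n_hidden)
seq = rng.normal(size=(T, n_feat))           # stand-in for encoded features
for x in seq:
    h = cell.step(x, h)

rul_estimate = float(w_out @ h)              # scalar RUL regression output
print(np.isfinite(rul_estimate))
```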

20 pages, 46722 KiB  
Article
Design and Optimization of Compact Printed Log-Periodic Dipole Array Antennas with Extended Low-Frequency Response
by Keyur K. Mistry, Pavlos I. Lazaridis, Zaharias D. Zaharis and Tian Hong Loh
Electronics 2021, 10(17), 2044; https://doi.org/10.3390/electronics10172044 - 24 Aug 2021
Cited by 19 | Viewed by 13696
Abstract
This paper initially presents an overview of different miniaturization techniques used for the size reduction of printed log-periodic dipole array (PLPDA) antennas, and then presents a conventional PLPDA design that operates from 0.7 to 8 GHz and achieves a realized gain of around 5.5 dBi over most of its bandwidth. This antenna is then used as a baseline model to implement a novel technique to extend the low-frequency response. This is accomplished by replacing the longest straight dipole with a triangular-shaped dipole and by optimizing the four longest dipoles of the antenna using the Trust Region Framework algorithm in CST. The improved antenna with extended low-frequency response operates from 0.4 GHz to 8 GHz with a slightly reduced gain at the lower frequencies. Full article
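For readers unfamiliar with LPDA geometry, the classic design relations behind such an array can be sketched from the scale factor τ and spacing factor σ: shrinking each successive dipole by τ determines how many elements a given band requires. The τ, σ and band-edge values below are illustrative, not this paper's optimized dimensions.

```python
# Sketch of log-periodic dipole array geometry generation from the scale
# factor tau and spacing factor sigma (classic Carrel-style relations).
# The numeric values are illustrative, not the paper's design.

tau, sigma = 0.9, 0.16
f_low, f_high = 0.4e9, 8e9        # target band, Hz
c = 299_792_458.0

# Longest element is a half-wavelength at the lowest frequency.
lengths = [c / (2 * f_low)]
spacings = []
while lengths[-1] > c / (2 * f_high):
    spacings.append(2 * sigma * lengths[-1])   # gap to the next dipole
    lengths.append(tau * lengths[-1])          # l_{n+1} = tau * l_n

print(len(lengths))          # number of dipoles needed to cover the band
print(round(lengths[0], 3))  # longest element, m
```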
(This article belongs to the Special Issue Evolutionary Antenna Optimization)

18 pages, 737 KiB  
Article
Miller Plateau Corrected with Displacement Currents and Its Use in Analyzing the Switching Process and Switching Loss
by Sheng Liu, Shuang Song, Ning Xie, Hai Chen, Xiaobo Wu and Menglian Zhao
Electronics 2021, 10(16), 2013; https://doi.org/10.3390/electronics10162013 - 20 Aug 2021
Cited by 6 | Viewed by 6516
Abstract
This paper reveals the relationship between the Miller plateau voltage and the displacement currents through the gate–drain capacitance (CGD) and the drain–source capacitance (CDS) in the switching process of a power transistor. The corrected turn-on and turn-off Miller plateau voltages are different even with a constant current load. Using the proposed new Miller plateau, the turn-on and turn-off sequences can be analyzed more accurately, and the switching power loss can accordingly be predicted more accurately. Switching loss models based on the new Miller plateau are also proposed. Experimental tests of a power MOSFET (NCE2030K) verified the relationship between the Miller plateau voltage and the displacement currents through CGD and CDS. A carefully designed verification test bench featuring a power MOSFET model written in Verilog-A confirmed the accuracy of the switching waveform and switching loss predicted with the newly proposed Miller plateau. The average relative error of the loss model using the new plateau is reduced to 1/2∼1/4 of that of the loss model using the old plateau; the proposed loss model using the new plateau, which also takes the gate current's variation into account, further reduces the error to around 5%. Full article
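The role of the plateau in loss prediction can be illustrated with the textbook hard-switching estimate that the paper's corrected plateau refines: the plateau duration follows from the gate charge through CGD and the available gate current. All component values below are illustrative and are not taken from the paper's NCE2030K characterization.

```python
# Hedged sketch of the textbook hard-switching loss estimate that Miller
# plateau analysis refines. All values are illustrative assumptions.

def switch_transition_loss(v_dd, i_load, q_gd, i_gate, f_sw):
    """Energy lost per transition ~ 0.5*V*I*t, with t = Q_GD / I_gate."""
    t_plateau = q_gd / i_gate          # Miller plateau duration, s
    e_per_edge = 0.5 * v_dd * i_load * t_plateau
    return 2 * e_per_edge * f_sw       # turn-on + turn-off, W

p = switch_transition_loss(v_dd=24.0, i_load=5.0,
                           q_gd=4e-9, i_gate=0.1, f_sw=200e3)
print(round(p, 3))  # -> 0.96 W
```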
(This article belongs to the Special Issue Advanced Analog Circuits for Emerging Applications)

20 pages, 3754 KiB  
Review
A Review on 5G Sub-6 GHz Base Station Antenna Design Challenges
by Madiha Farasat, Dushmantha N. Thalakotuna, Zhonghao Hu and Yang Yang
Electronics 2021, 10(16), 2000; https://doi.org/10.3390/electronics10162000 - 19 Aug 2021
Cited by 53 | Viewed by 17054
Abstract
Modern wireless networks such as 5G require multiband, MIMO-supported Base Station Antennas (BSAs). As a result, antennas have multiple ports to support a range of frequency bands, leading to multiple arrays within one compact antenna enclosure. The close proximity of the arrays results in significant scattering, which degrades the pattern performance of each band, while coupling between arrays degrades return loss and port-to-port isolation. Different design techniques have been adopted in the literature to overcome such challenges. This paper provides a classification of the challenges in BSA design and a cohesive list of the design techniques adopted in the literature to address them. Full article
(This article belongs to the Special Issue Antenna Designs for 5G/IoT and Space Applications)

18 pages, 985 KiB  
Article
Congestion Prediction in FPGA Using Regression Based Learning Methods
by Pingakshya Goswami and Dinesh Bhatia
Electronics 2021, 10(16), 1995; https://doi.org/10.3390/electronics10161995 - 18 Aug 2021
Cited by 12 | Viewed by 3958
Abstract
Design closure in general VLSI physical design flows and FPGA physical design flows is an important and time-consuming problem. Routing itself can consume as much as 70% of the total design time. Accurate congestion estimation during the early stages of the design flow can help alleviate last-minute routing-related surprises. This paper describes a methodology for a post-placement, machine-learning-based routing congestion prediction model for FPGAs. Routing congestion is modeled as a regression problem. We describe the methods for generating training data, feature extraction, training, regression models, validation, and deployment. We tested our prediction model using the ISPD 2016 FPGA benchmarks. Our method reports a very accurate localized congestion value in each channel around a configurable logic block (CLB). The localized congestion is predicted in both the vertical and horizontal directions. We demonstrate the effectiveness of our model on completely unseen designs that were not part of the training data set. The results show significant improvement in accuracy, measured as mean absolute error, and in prediction time when compared against the latest state-of-the-art works. Full article
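Casting congestion estimation as regression can be sketched with synthetic per-CLB features and an ordinary-least-squares model evaluated by mean absolute error, the paper's accuracy metric. The features, weights, and noise level are fabricated for illustration; the paper's models and ISPD 2016 features are far richer.

```python
import numpy as np

# Minimal sketch of congestion prediction as regression: synthetic per-CLB
# placement features mapped to a channel congestion value by least squares.
rng = np.random.default_rng(1)
n_clb, n_feat = 2000, 4
X = rng.uniform(0, 1, (n_clb, n_feat))        # e.g., pin density, fanout, ...
true_w = np.array([0.6, 0.3, 0.05, 0.05])
y = X @ true_w + rng.normal(0, 0.01, n_clb)   # synthetic "ground truth"

# Train on 80%, evaluate MAE on the held-out 20% (unseen "designs").
split = int(0.8 * n_clb)
w, *_ = np.linalg.lstsq(X[:split], y[:split], rcond=None)
mae = np.abs(X[split:] @ w - y[split:]).mean()
print(mae < 0.05)   # noise-limited error -> True
```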
(This article belongs to the Special Issue Advanced AI Hardware Designs Based on FPGAs)

14 pages, 5235 KiB  
Article
Design and Preliminary Experiment of W-Band Broadband TE02 Mode Gyro-TWT
by Xu Zeng, Chaohai Du, An Li, Shang Gao, Zheyuan Wang, Yichi Zhang, Zhangxiong Zi and Jinjun Feng
Electronics 2021, 10(16), 1950; https://doi.org/10.3390/electronics10161950 - 13 Aug 2021
Cited by 19 | Viewed by 3007
Abstract
The gyrotron travelling wave tube (gyro-TWT) is an ideal high-power, broadband vacuum electron amplifier in the millimeter and sub-millimeter wave bands. It can be applied as the source of an imaging radar to improve its resolution and operating range. To satisfy the requirements of W-band high-resolution imaging radar, the design and experimental verification of a W-band broadband TE02 mode gyro-TWT were carried out. In this paper, the designs of the key components of the vacuum tube are introduced, including the interaction area, the electron optical system, and the transmission system. The experimental results show that at a duty ratio of 1%, the output power is above 60 kW over a bandwidth of 8 GHz, and the saturated gain is above 32 dB. In addition, parasitic mode oscillations were observed in the experiment, which limited the increase in duty ratio and caused the measured gains to be much lower than the simulation results. The causes of this phenomenon and methods for suppressing it are under study. Full article

21 pages, 6785 KiB  
Review
Review of Electric Vehicle Technologies, Charging Methods, Standards and Optimization Techniques
by Syed Muhammad Arif, Tek Tjing Lie, Boon Chong Seet, Soumia Ayyadi and Kristian Jensen
Electronics 2021, 10(16), 1910; https://doi.org/10.3390/electronics10161910 - 9 Aug 2021
Cited by 154 | Viewed by 22047
Abstract
This paper presents a state-of-the-art review of electric vehicle technology, charging methods, standards, and optimization techniques. The essential characteristics of Hybrid Electric Vehicle (HEV) and Electric Vehicle (EV) are first discussed. Recent research on EV charging methods such as Battery Swap Station (BSS), Wireless Power Transfer (WPT), and Conductive Charging (CC) are then presented. This is followed by a discussion of EV standards such as charging levels and their configurations. Next, some of the most used optimization techniques for the sizing and placement of EV charging stations are analyzed. Finally, based on the insights gained, several recommendations are put forward for future research. Full article

18 pages, 1222 KiB  
Article
Determination of Traffic Characteristics of Elastic Optical Networks Nodes with Reservation Mechanisms
by Maciej Sobieraj, Piotr Zwierzykowski and Erich Leitgeb
Electronics 2021, 10(15), 1853; https://doi.org/10.3390/electronics10151853 - 1 Aug 2021
Cited by 11 | Viewed by 2922
Abstract
With the ever-increasing demand for bandwidth, mechanisms that provide a reliable and optimal service level to designated or specified traffic classes during heavy traffic loads are becoming particularly sought after. One of these is the resource reservation mechanism, in which parts of the resources are available only to selected (pre-defined) services. Considering modern elastic optical networks (EONs), in which advanced data transmission techniques are used, a simulation program was developed to determine the traffic characteristics of EON nodes. This article discusses a simulation program whose advantage is the possibility of determining the loss probability for individual service classes in the nodes of an EON in which the resource reservation mechanism has been introduced. The initial assumption in the article is that a Clos optical switching network is used to construct the EON nodes. The results obtained with the simulator developed by the authors allow the influence of the introduced reservation mechanism on the loss probability of calls of the individual traffic classes offered to the system to be determined. Full article
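The effect of a reservation mechanism on per-class loss probability can be illustrated with a toy single-link Monte Carlo simulation: one class may use the full capacity, while the other is blocked once occupancy exceeds the reservation threshold. This sketches the mechanism only; the paper simulates full Clos-based EON switching nodes, not a single link.

```python
import random

# Toy sketch of per-class loss probability under resource reservation:
# class 0 may use all C units; class 1 is blocked once occupancy exceeds
# C - R (the reserved headroom). All parameters are illustrative.
random.seed(7)
C, R = 20, 4                 # capacity units, reserved units
lam = [6.0, 6.0]             # arrival rates per class
mu = 1.0                     # service rate
offered = [0, 0]
lost = [0, 0]
busy = []                    # remaining service times of accepted calls

t, dt, horizon = 0.0, 0.01, 2000.0
while t < horizon:
    busy = [b - dt for b in busy if b - dt > 0]
    for cls in (0, 1):
        if random.random() < lam[cls] * dt:    # Bernoulli arrival approx.
            offered[cls] += 1
            limit = C if cls == 0 else C - R   # reservation threshold
            if len(busy) < limit:
                busy.append(random.expovariate(mu))
            else:
                lost[cls] += 1
    t += dt

loss = [lost[i] / offered[i] for i in (0, 1)]
print(loss[0] < loss[1])     # reserved headroom favors class 0 -> True
```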
(This article belongs to the Special Issue 10th Anniversary of Electronics: Advances in Networks)

17 pages, 6667 KiB  
Article
Analysis of Obstacle Avoidance Strategy for Dual-Arm Robot Based on Speed Field with Improved Artificial Potential Field Algorithm
by Hui Zhang, Yongfei Zhu, Xuefei Liu and Xiangrong Xu
Electronics 2021, 10(15), 1850; https://doi.org/10.3390/electronics10151850 - 31 Jul 2021
Cited by 40 | Viewed by 5461
Abstract
In recent years, dual-arm robots have been favored in various industries due to their excellent coordinated operability. One focus of research on dual-arm robots is obstacle avoidance, namely path planning. Among existing path planning methods, the artificial potential field (APF) algorithm is widely applied to obstacle avoidance for its simplicity, practicality, and good real-time performance. However, APF was originally proposed to solve the obstacle avoidance problem of a mobile robot in the plane, and thus has limitations: it is prone to falling into local minima and is not applicable when dynamic obstacles are encountered. Therefore, an obstacle avoidance strategy for a dual-arm robot based on a speed field with an improved artificial potential field algorithm is proposed. In our method, the APF algorithm is used to establish the attraction and repulsion functions of the robotic manipulator, and the concepts of attraction and repulsion speed are introduced. The attraction and repulsion functions are converted into attraction and repulsion speed functions, which are mapped to the joint space. The collision avoidance problem is solved by using the Jacobian matrix and its inverse to establish the differential velocity function of joint motion and comparing the result with the set collision distance threshold between the robot's two manipulators. Meanwhile, APF itself is improved by introducing a new repulsion function and adding virtual constraint points to eliminate the existing limitations. The correctness and effectiveness of the proposed method for the self-collision avoidance problem of a dual-arm robot are validated in the MATLAB and Adams simulation environments. Full article
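The workspace-level speed field described above can be sketched as an attraction speed proportional to the goal error plus a repulsion speed active inside an obstacle's influence radius. Gains and radii are illustrative, and the mapping of these speeds to joint space through the Jacobian, central to the paper's method, is omitted here.

```python
import numpy as np

# Minimal sketch of a velocity-field APF step in the workspace.
# Gains, influence radius, and all positions are illustrative values.

def apf_velocity(pos, goal, obstacle, k_att=1.0, k_rep=0.5, rho0=1.0):
    v_att = k_att * (goal - pos)                 # attraction speed
    d_vec = pos - obstacle
    d = np.linalg.norm(d_vec)
    v_rep = np.zeros(3)
    if d < rho0:                                 # inside influence radius
        v_rep = k_rep * (1.0 / d - 1.0 / rho0) / d**2 * (d_vec / d)
    return v_att + v_rep

pos = np.array([0.0, 0.0, 0.0])
goal = np.array([1.0, 0.0, 0.0])
obstacle = np.array([0.5, 0.4, 0.0])

v = apf_velocity(pos, goal, obstacle)
print(v[0] > 0)   # net motion is still toward the goal along x -> True
```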
(This article belongs to the Section Systems & Control Engineering)

20 pages, 813 KiB  
Article
Improving Semi-Supervised Learning for Audio Classification with FixMatch
by Sascha Grollmisch and Estefanía Cano
Electronics 2021, 10(15), 1807; https://doi.org/10.3390/electronics10151807 - 28 Jul 2021
Cited by 22 | Viewed by 6320
Abstract
Including unlabeled data in the training process of neural networks using Semi-Supervised Learning (SSL) has shown impressive results in the image domain, where state-of-the-art results were obtained with only a fraction of the labeled data. The commonality between recent SSL methods is that they strongly rely on the augmentation of unannotated data, which remains largely unexplored for audio data. In this work, SSL using the state-of-the-art FixMatch approach is evaluated on three audio classification tasks, including music, industrial sounds, and acoustic scenes. The performance of FixMatch is compared to Convolutional Neural Networks (CNNs) trained from scratch, Transfer Learning, and SSL using the Mean Teacher approach. Additionally, a simple yet effective approach for selecting suitable augmentation methods for FixMatch is introduced. FixMatch with the proposed modifications always outperformed Mean Teacher and the CNNs trained from scratch. For the industrial sounds and music datasets, the CNN baseline performance using the full dataset was reached with less than 5% of the initial training data, demonstrating the potential of recent SSL methods for audio data. Transfer Learning outperformed FixMatch only on the most challenging dataset, acoustic scene classification, showing that there is still room for improvement. Full article
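FixMatch's core rule, using confident predictions on weakly augmented data as targets for strongly augmented views, can be sketched in a few lines. The probability values below are made up for illustration, and the 0.95 threshold is FixMatch's commonly used default rather than this paper's tuned setting.

```python
import numpy as np

# Sketch of FixMatch's pseudo-labeling rule: weak-view predictions become
# targets for the strong view only when confidence clears a threshold.

def fixmatch_targets(weak_probs, threshold=0.95):
    """Return (pseudo_labels, mask) for a batch of weak-view predictions."""
    conf = weak_probs.max(axis=1)
    labels = weak_probs.argmax(axis=1)
    mask = conf >= threshold          # low-confidence examples are ignored
    return labels, mask

weak_probs = np.array([
    [0.97, 0.02, 0.01],   # confident -> contributes to the loss
    [0.50, 0.30, 0.20],   # uncertain -> masked out
    [0.01, 0.01, 0.98],   # confident -> contributes
])
labels, mask = fixmatch_targets(weak_probs)
print(labels[mask].tolist(), int(mask.sum()))  # -> [0, 2] 2
```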
(This article belongs to the Special Issue Machine Learning Applied to Music/Audio Signal Processing)

11 pages, 951 KiB  
Article
Ultralow Voltage FinFET- Versus TFET-Based STT-MRAM Cells for IoT Applications
by Esteban Garzón, Marco Lanuzza, Ramiro Taco and Sebastiano Strangio
Electronics 2021, 10(15), 1756; https://doi.org/10.3390/electronics10151756 - 22 Jul 2021
Cited by 15 | Viewed by 4015
Abstract
Spin-transfer torque magnetic tunnel junctions (STT-MTJs) based on the double-barrier magnetic tunnel junction (DMTJ) have shown promising characteristics for defining low-power non-volatile memories. This, combined with tunnel FET (TFET) technology, could enable the design of ultralow-power/ultralow-energy STT magnetic RAMs (STT-MRAMs) for future Internet of Things (IoT) applications. This paper presents a comparison between FinFET- and TFET-based STT-MRAM bitcells operating at ultralow voltages. Our study is performed at the bitcell level by considering a DMTJ with two reference layers and exploiting either FinFET or TFET devices as cell selectors. Although ultralow-voltage operation comes at the expense of reduced read voltage sensing margins, simulation results show that TFET-based solutions are more resilient to process variations and can operate at ultralow voltages (<0.5 V), while providing energy savings of 50% and 60% faster write switching. Full article

18 pages, 1840 KiB  
Article
Recurrent Neural Network for Human Activity Recognition in Embedded Systems Using PPG and Accelerometer Data
by Michele Alessandrini, Giorgio Biagetti, Paolo Crippa, Laura Falaschetti and Claudio Turchetti
Electronics 2021, 10(14), 1715; https://doi.org/10.3390/electronics10141715 - 17 Jul 2021
Cited by 55 | Viewed by 5508
Abstract
Photoplethysmography (PPG) is a common and practical technique to detect human activity and other physiological parameters and is commonly implemented in wearable devices. However, the PPG signal is often severely corrupted by motion artifacts. The aim of this paper is to address the human activity recognition (HAR) task directly on the device, implementing a recurrent neural network (RNN) in a low cost, low power microcontroller, ensuring the required performance in terms of accuracy and low complexity. To reach this goal, (i) we first develop an RNN, which integrates PPG and tri-axial accelerometer data, where these data can be used to compensate motion artifacts in PPG in order to accurately detect human activity; (ii) then, we port the RNN to an embedded device, Cloud-JAM L4, based on an STM32 microcontroller, optimizing it to maintain an accuracy of over 95% while requiring modest computational power and memory resources. The experimental results show that such a system can be effectively implemented on a constrained-resource system, allowing the design of a fully autonomous wearable embedded system for human activity recognition and logging. Full article
(This article belongs to the Section Artificial Intelligence Circuits and Systems (AICAS))

29 pages, 1934 KiB  
Review
Massive MIMO Techniques for 5G and Beyond—Opportunities and Challenges
by David Borges, Paulo Montezuma, Rui Dinis and Marko Beko
Electronics 2021, 10(14), 1667; https://doi.org/10.3390/electronics10141667 - 13 Jul 2021
Cited by 59 | Viewed by 15588
Abstract
Telecommunications have grown to be a pillar of a functional society, and the demand for reliable, high-throughput systems has become the main objective of researchers and engineers. State-of-the-art work considers massive Multiple-Input Multiple-Output (massive MIMO) the key technology for 5G and beyond. Large spatial multiplexing and diversity gains are some of the major benefits, together with improved energy efficiency. Current works mostly assume the application of well-established techniques in a massive MIMO scenario, although there are still open challenges regarding hardware complexity, computational complexity, and energy efficiency. Fully digital, analog, and hybrid structures are analyzed, and a multi-layer massive MIMO transmission technique is detailed. The purpose of this article is to describe the most acknowledged transmission techniques for massive MIMO systems, to analyze some of the most promising ones, and to identify existing problems and limitations. Full article
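As a concrete example of one well-established linear technique covered by such surveys, the sketch below applies zero-forcing precoding to a toy massive MIMO downlink, where the channel pseudo-inverse pre-cancels inter-user interference. The dimensions and the i.i.d. Rayleigh channel are illustrative assumptions.

```python
import numpy as np

# Illustrative sketch of zero-forcing (ZF) precoding for a massive MIMO
# downlink with many more base-station antennas than users.
rng = np.random.default_rng(3)
n_tx, n_users = 64, 4                      # massive MIMO: n_tx >> n_users

H = (rng.normal(size=(n_users, n_tx)) +    # i.i.d. Rayleigh channel
     1j * rng.normal(size=(n_users, n_tx))) / np.sqrt(2)

W = np.linalg.pinv(H)                      # ZF precoder (n_tx x n_users)
s = rng.normal(size=n_users) + 1j * rng.normal(size=n_users)  # user symbols

y = H @ (W @ s)                            # received signal, noise-free
print(np.allclose(y, s))                   # interference removed -> True
```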

26 pages, 5653 KiB  
Article
Deep Learning Techniques for the Classification of Colorectal Cancer Tissue
by Min-Jen Tsai and Yu-Han Tao
Electronics 2021, 10(14), 1662; https://doi.org/10.3390/electronics10141662 - 12 Jul 2021
Cited by 61 | Viewed by 6463
Abstract
It is very important to make an objective evaluation of colorectal cancer histological images. Current approaches are generally based on the use of different combinations of textural features and classifiers to assess classification performance, or on transfer learning to classify different tissue types. However, since histological images contain multiple tissue types and characteristics, classification is still challenging. In this study, we propose a classification methodology based on a selected optimizer and modified parameters of CNN methods, and use deep learning technology to distinguish between healthy and diseased large intestine tissues. Firstly, we trained a neural network and compared the network architecture optimizers. Secondly, we modified the parameters of the network layers to optimize the superior architecture. Finally, we compared our well-trained deep learning methods on two different open histological image datasets: one comprised 5000 H&E images of colorectal cancer, and the other was composed of 100,000 images in nine tissue categories, with an external validation set of 7180 images. The results showed that the accuracy of recognition of histopathological images was significantly better than that of existing methods. Therefore, this method is expected to have great potential to assist physicians in making clinical diagnoses and to reduce the number of disparate assessments, based on the use of artificial intelligence to classify colorectal cancer tissue. Full article

28 pages, 1136 KiB  
Review
Survey of Millimeter-Wave Propagation Measurements and Models in Indoor Environments
by Ahmed Al-Saman, Michael Cheffena, Olakunle Elijah, Yousef A. Al-Gumaei, Sharul Kamal Abdul Rahim and Tawfik Al-Hadhrami
Electronics 2021, 10(14), 1653; https://doi.org/10.3390/electronics10141653 - 11 Jul 2021
Cited by 47 | Viewed by 7045
Abstract
The millimeter-wave (mmWave) band is expected to deliver huge bandwidths to address future demands for higher data rate transmission. However, one of the major challenges in the mmWave band is the increase in signal loss as the operating frequency increases. This has attracted considerable research interest from both academia and industry for indoor and outdoor mmWave operation. This paper focuses on the work that has been carried out on mmWave channel measurement in indoor environments. A survey of the measurement techniques, prominent path loss models, and analyses of path loss and delay spread for mmWave in different indoor environments is presented, covering the mmWave frequencies from 28 GHz to 100 GHz that have been considered in the last two decades. In addition, possible future trends for mmWave indoor propagation studies and measurements are discussed. These include critical indoor environments, the role of artificial intelligence, channel characterization for indoor devices, reconfigurable intelligent surfaces, and mmWave for 6G systems. This survey can help engineers and researchers to plan, design, and optimize reliable 5G wireless indoor networks, and it will also motivate the research and engineering communities towards better outcomes in the future trends of mmWave indoor wireless networks for 6G systems and beyond. Full article
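A prominent model family in such surveys is the close-in (CI) free-space reference distance path loss model; a minimal sketch appears below. The 28 GHz carrier and the path loss exponent n = 1.7 are example values for an indoor line-of-sight setting, not results from the surveyed campaigns.

```python
import math

# Sketch of the close-in (CI) free-space reference path loss model used in
# many mmWave indoor studies: PL(d) = FSPL(f, 1 m) + 10*n*log10(d).
# The carrier and path loss exponent below are assumed example values.

def fspl_1m_db(f_hz):
    """Free-space path loss at the 1 m reference distance, in dB."""
    c = 299_792_458.0
    return 20 * math.log10(4 * math.pi * f_hz / c)

def ci_path_loss_db(f_hz, d_m, n):
    return fspl_1m_db(f_hz) + 10 * n * math.log10(d_m)

f = 28e9           # 28 GHz carrier
n = 1.7            # example indoor LOS path loss exponent
for d in (1, 10, 50):
    print(d, round(ci_path_loss_db(f, d, n), 1))
```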

14 pages, 33093 KiB  
Article
Underwater Target Recognition Based on Improved YOLOv4 Neural Network
by Lingyu Chen, Meicheng Zheng, Shunqiang Duan, Weilin Luo and Ligang Yao
Electronics 2021, 10(14), 1634; https://doi.org/10.3390/electronics10141634 - 9 Jul 2021
Cited by 55 | Viewed by 5219
Abstract
The YOLOv4 neural network is employed for underwater target recognition. To improve the accuracy and speed of recognition, the structure of YOLOv4 is modified by replacing the upsampling module with a deconvolution module and by incorporating depthwise separable convolution into the network. Moreover, the training set used for the YOLO network is preprocessed with a modified mosaic augmentation, in which the gray world algorithm is used to derive two of the images. The recognition results and the comparison with other target detectors demonstrate the effectiveness of the proposed YOLOv4 structure and the data preprocessing method. According to both subjective and objective evaluation, the proposed target recognition strategy can effectively improve the accuracy and speed of underwater target recognition while also reducing hardware performance requirements. Full article
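The gray world step used in the modified mosaic augmentation is simple enough to sketch directly: each channel is rescaled so that all channel means match their global mean, countering the color cast typical of underwater images. The random input below is a stand-in for real frames.

```python
import numpy as np

# Minimal sketch of the gray world color-correction algorithm: scale each
# channel so all channel means equal the global mean. Input is stand-in data.

def gray_world(img):
    """img: float array (H, W, 3); returns the gray-world-balanced image."""
    means = img.reshape(-1, 3).mean(axis=0)       # per-channel means
    gain = means.mean() / means                   # per-channel scale factors
    return np.clip(img * gain, 0.0, 1.0)

rng = np.random.default_rng(0)
img = rng.uniform(0, 0.5, (32, 32, 3))
img[..., 2] *= 1.8                                # simulate a blue cast

balanced = gray_world(img)
m = balanced.reshape(-1, 3).mean(axis=0)
print(np.allclose(m, m.mean(), atol=1e-2))        # channel means equalized
```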
(This article belongs to the Special Issue Convolutional Neural Networks and Vision Applications)

21 pages, 11829 KiB  
Article
Implementing a Gaze Tracking Algorithm for Improving Advanced Driver Assistance Systems
by Agapito Ledezma, Víctor Zamora, Óscar Sipele, M. Paz Sesmero and Araceli Sanchis
Electronics 2021, 10(12), 1480; https://doi.org/10.3390/electronics10121480 - 19 Jun 2021
Cited by 22 | Viewed by 4469
Abstract
Car accidents are among the top ten causes of death and are caused mainly by driver distraction. ADAS (Advanced Driver Assistance Systems) can warn the driver of dangerous scenarios, improving road safety and reducing the number of traffic accidents. However, a system that continuously sounds alarms can be overwhelming, confusing, or both, and can be counterproductive. Using the driver’s attention to build an efficient ADAS is the main contribution of this work. To obtain this “attention value”, the use of gaze tracking is proposed. The driver’s gaze direction is a crucial factor in understanding fatal distractions, as well as in discerning when it is necessary to warn the driver about risks on the road. In this paper, a real-time gaze tracking system is proposed as part of the development of an ADAS that obtains and communicates the driver’s gaze information. The developed ADAS uses gaze information to determine whether drivers are looking at the road with their full attention. This work takes a step forward in driver-based ADAS by building a system that warns the driver only in case of distraction. The gaze tracking system was implemented as a model-based system using a Kinect v2.0 sensor, adjusted in a set-up environment, and tested in a driving simulation environment with suitable features. The average results are promising, with hit ratios between 81.84% and 96.37%. Full article
(This article belongs to the Special Issue AI-Based Autonomous Driving System)
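The idea of warning only on sustained distraction, rather than on every alarm condition, can be sketched in a few lines. Everything below (field names, angular thresholds, window length) is a hypothetical illustration, not the authors' actual model:

```python
from dataclasses import dataclass

@dataclass
class GazeSample:
    yaw: float    # horizontal gaze angle, degrees
    pitch: float  # vertical gaze angle, degrees

def is_on_road(g: GazeSample, yaw_limit=20.0, pitch_limit=15.0) -> bool:
    """Treat gaze within a forward cone as 'attending to the road'."""
    return abs(g.yaw) <= yaw_limit and abs(g.pitch) <= pitch_limit

def should_warn(samples, window=30, distracted_ratio=0.5) -> bool:
    """Warn only if the driver looked away from the road for more than
    `distracted_ratio` of the last `window` gaze samples."""
    recent = samples[-window:]
    off_road = sum(not is_on_road(s) for s in recent)
    return off_road / len(recent) > distracted_ratio
```

The point of the windowed ratio is exactly the abstract's motivation: a single glance away does not trigger an alarm, so the system avoids the constant-warning problem.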

16 pages, 9144 KiB  
Article
A Sub-6G SP32T Single-Chip Switch with Nanosecond Switching Speed for 5G Applications in 0.25 μm GaAs Technology
by Tianxiang Wu, Jipeng Wei, Hongquan Liu, Shunli Ma, Yong Chen and Junyan Ren
Electronics 2021, 10(12), 1482; https://doi.org/10.3390/electronics10121482 - 19 Jun 2021
Cited by 10 | Viewed by 4453
Abstract
This paper presents a single-pole 32-throw (SP32T) switch with an operating frequency of up to 6 GHz for 5G communication applications. Compared to the traditional SP32T module implemented as a waveguide package with large volume and high power consumption, the proposed switch significantly simplifies the system while offering a smaller size and lighter weight. The proposed SP32T scheme utilizes a tree structure that dramatically reduces dc power and enhances isolation between different output ports, making it suitable for low-power 5G communication. A novel design methodology based on the transmission (ABCD) matrix is proposed to optimize the switch, achieving low insertion loss and high isolation simultaneously. The average insertion loss and isolation are 1.5 dB and 35 dB, respectively, at the 6 GHz operating frequency. The measured input return loss is better than 10 dB at 6 GHz. The 1 dB input compression point of the SP32T is 15 dBm. The prototype is designed in 5 V 0.25 μm GaAs technology and occupies a small area of 12 mm2. Full article
(This article belongs to the Section Circuit and Signal Processing)
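The ABCD-matrix methodology itself is not reproduced in the abstract, but the general technique it names is standard: cascade two-port building blocks by matrix multiplication, then read insertion loss from the equivalent S21 of the overall matrix. A sketch under that assumption (the element values Ron and Coff below are hypothetical, not taken from the paper):

```python
import numpy as np

Z0 = 50.0  # system reference impedance, ohms

def series(Z: complex) -> np.ndarray:
    """ABCD matrix of a series impedance Z."""
    return np.array([[1, Z], [0, 1]], dtype=complex)

def shunt(Y: complex) -> np.ndarray:
    """ABCD matrix of a shunt admittance Y."""
    return np.array([[1, 0], [Y, 1]], dtype=complex)

def insertion_loss_db(abcd: np.ndarray) -> float:
    """Insertion loss from the S21 of a two-port in a Z0 system:
    S21 = 2 / (A + B/Z0 + C*Z0 + D)."""
    (A, B), (C, D) = abcd
    s21 = 2 / (A + B / Z0 + C * Z0 + D)
    return -20 * np.log10(abs(s21))

# Cascading is just a matrix product in signal-flow order.
f = 6e9             # operating frequency, Hz
Ron = 5.0           # hypothetical on-resistance of a series switch arm
Coff = 50e-15       # hypothetical off-capacitance of a shunt branch
net = series(Ron) @ shunt(1j * 2 * np.pi * f * Coff)
print(insertion_loss_db(net))
```

With building blocks like these, a tree of switch stages can be multiplied out analytically, which is what makes the ABCD formulation convenient for jointly optimizing insertion loss and isolation.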

12 pages, 6793 KiB  
Article
Metal-Insulator-Metal Waveguide-Based Racetrack Integrated Circular Cavity for Refractive Index Sensing Application
by Muhammad A. Butt, Andrzej Kaźmierczak, Nikolay L. Kazanskiy and Svetlana N. Khonina
Electronics 2021, 10(12), 1419; https://doi.org/10.3390/electronics10121419 - 12 Jun 2021
Cited by 34 | Viewed by 6000
Abstract
Herein, a novel racetrack-integrated circular cavity based on a metal-insulator-metal (MIM) waveguide is proposed for refractive index sensing. Over the past few years, several unique cavity designs have been introduced to improve the sensing performance of plasmonic sensors built on MIM waveguides, and an optimized cavity design can provide the best sensing performance. In this work, we numerically analyze the device design using the finite element method (FEM). Small variations in the geometric parameters of the device can significantly shift its sensitivity and figure of merit (FOM). The best sensitivity and FOM of the proposed device are 1400 nm/RIU and ~12.01, respectively. We believe that the sensor design analyzed in this work can be utilized in the on-chip detection of biochemical analytes. Full article
(This article belongs to the Special Issue Nanophotonics for Next-Generation IoT Sensors)
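The two reported metrics follow from standard definitions: bulk sensitivity S = Δλ/Δn (nm/RIU) and FOM = S/FWHM. A short sketch; the resonance wavelengths and FWHM below are hypothetical values chosen only so the arithmetic lands near the reported figures, not measurements from the paper:

```python
def sensitivity_nm_per_riu(lam1_nm: float, lam2_nm: float,
                           n1: float, n2: float) -> float:
    """Bulk refractive-index sensitivity S = Δλ / Δn."""
    return (lam2_nm - lam1_nm) / (n2 - n1)

def figure_of_merit(S_nm_per_riu: float, fwhm_nm: float) -> float:
    """FOM = S / FWHM of the resonance dip."""
    return S_nm_per_riu / fwhm_nm

# Hypothetical example: resonance shifts 14 nm for Δn = 0.01
S = sensitivity_nm_per_riu(1200.0, 1214.0, 1.00, 1.01)   # 1400 nm/RIU
fom = figure_of_merit(S, fwhm_nm=116.6)                   # ≈ 12.0
print(S, fom)
```

This also shows why FOM is the more demanding metric: a broad resonance (large FWHM) erodes the FOM even when the raw wavelength shift per RIU is large.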
