
Electronics, Volume 10, Issue 7 (April-1 2021) – 112 articles

Cover Story: Automation and data exchange in manufacturing are enabled by emerging technological advancements, including artificial intelligence, the Internet of Things (IoT), cloud computing, and cyber-physical systems. To support data-driven decision making in the so-called “Industry 4.0”, new methods and algorithms are emerging that help operators make informed, optimal decisions about maintenance and operational actions. Fueled by advanced cyber-physical systems, as well as cloud technologies for processing and storing data, next-generation decision making for maintenance, which is examined in this article, will be increasingly responsive and capable of facilitating accurate and proactive decisions.

 

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive table of contents of newly released issues.
  • Papers are published in both HTML and PDF formats, with PDF as the official version. To view a paper in PDF format, click the "PDF Full-text" link and open it with the free Adobe Reader.
Article
Double-Threshold Segmentation of Panicle and Clustering Adaptive Density Estimation for Mature Rice Plants Based on 3D Point Cloud
Electronics 2021, 10(7), 872; https://doi.org/10.3390/electronics10070872 - 06 Apr 2021
Viewed by 548
Abstract
Crop density estimation ahead of the combine harvester provides a valuable reference for operators to keep the feeding amount stable in agricultural production, thereby guaranteeing working stability and improving operational efficiency. With current LiDAR-based methods, it is difficult to extract individual plants from mature rice plants with luxuriant branches and leaves, as well as bent and intersected panicles. Therefore, this paper proposes a clustering adaptive density estimation method based on a constructed LiDAR measurement system and double-threshold segmentation. After reducing noise with the statistical outlier removal (SOR) algorithm, the Otsu algorithm is adopted to construct a double threshold according to elevation and inflection intensity in different parts of the rice plant. To adaptively adjust the parameters of supervoxel clustering and mean-shift clustering during density estimation, the calculation relationships between the influencing factors (seed-point size and kernel-bandwidth size) and the number of points are deduced by analysis. The density estimation experiments proved both clustering methods effective, with Root Mean Square Errors (RMSE) of 9.968 and 5.877, Mean Absolute Percent Errors (MAPE) of 5.67% and 3.37%, and average accuracies above 90% and 95%, respectively. This estimation method is of positive significance for crop density measurement and could lay the foundation for intelligent harvesting. Full article
(This article belongs to the Special Issue Electronics for Agriculture)
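As a point of reference for the error metrics quoted in this abstract, RMSE and MAPE can be computed from estimated and ground-truth densities as in this short sketch (the sample values below are illustrative only, not the paper's data):

```python
import math

def rmse(actual, predicted):
    # Root Mean Square Error: square root of the mean squared deviation
    return math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual))

def mape(actual, predicted):
    # Mean Absolute Percent Error, expressed in percent
    return 100.0 * sum(abs((a - p) / a) for a, p in zip(actual, predicted)) / len(actual)

# Illustrative plant densities (plants per unit area), not the paper's data
truth = [100.0, 120.0, 90.0]
estimate = [95.0, 126.0, 93.0]
print(f"RMSE = {rmse(truth, estimate):.3f}, MAPE = {mape(truth, estimate):.2f}%")
```

Note that MAPE is undefined where the ground truth is zero, which is why density estimation papers typically report it over occupied plots only.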

Article
Artificial Intelligence Applications in Military Systems and Their Influence on Sense of Security of Citizens
Electronics 2021, 10(7), 871; https://doi.org/10.3390/electronics10070871 - 06 Apr 2021
Cited by 2 | Viewed by 2106
Abstract
The paper presents an overview of current and expected prospects for the development of artificial intelligence algorithms, especially in military applications, and of research conducted regarding applications in the area of civilian life. Attention is paid mainly to the use of AI algorithms in cybersecurity, object detection, military logistics, and robotics. The paper discusses the problems connected with present solutions and how artificial intelligence can help solve them. It also briefly presents the mathematical structures and descriptions of ART, CNN, and SVM networks, as well as the Expectation–Maximization and Gaussian Mixture Model algorithms used in solving the discussed problems. The third section discusses the attitude of society towards the use of neural network algorithms in military applications. The basic problems related to ethics in the application of artificial intelligence and issues of responsibility for errors made by autonomous systems are discussed. Full article
(This article belongs to the Section Artificial Intelligence)

Article
The Control Method of Twin Delayed Deep Deterministic Policy Gradient with Rebirth Mechanism to Multi-DOF Manipulator
Electronics 2021, 10(7), 870; https://doi.org/10.3390/electronics10070870 - 06 Apr 2021
Cited by 1 | Viewed by 691
Abstract
As a research hotspot in the field of artificial intelligence, deep reinforcement learning can help a manipulator learn its motion ability without a kinematic model. To suppress the overestimation bias of value estimates in Deep Deterministic Policy Gradient (DDPG) networks, the Twin Delayed Deep Deterministic Policy Gradient (TD3) was proposed. This paper further suppresses this overestimation bias for multi-degree-of-freedom (DOF) manipulator learning based on deep reinforcement learning, proposing the Twin Delayed Deep Deterministic Policy Gradient with Rebirth Mechanism (RTD3). The experimental results show that RTD3 applied to multi-DOF manipulators improves learning ability by 29.15% relative to TD3. A step-by-step reward function is proposed specifically for learning the multi-DOF manipulator's motion ability. The manipulator's learning is guided from the viewpoint of a continuous decision process, and learning efficiency is improved by optimizing experience replay. To measure the point-to-point position motion ability of a manipulator, a new evaluation index based on the characteristics of the continuous decision process, energy efficiency distance, is presented, which can evaluate the learning quality of the manipulator's motion ability with a more comprehensive and fair evaluation algorithm. Full article
(This article belongs to the Special Issue Advances in Robotic Mobile Manipulation)
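The overestimation suppression that TD3 (and hence the RTD3 variant above) relies on is clipped double Q-learning: the bootstrap target takes the minimum of two target critics. A minimal, framework-free sketch of that target computation (function and variable names are ours, for illustration, not the paper's implementation):

```python
def td3_target(reward, gamma, q1_next, q2_next, done):
    # Clipped double Q-learning: bootstrap from the minimum of the two
    # target critics' estimates, which counteracts value overestimation
    if done:
        return reward  # terminal state: no bootstrapped future value
    return reward + gamma * min(q1_next, q2_next)

# With gamma = 0.99 and target-critic estimates 5.0 and 4.0,
# the smaller estimate (4.0) is the one bootstrapped from
target = td3_target(reward=1.0, gamma=0.99, q1_next=5.0, q2_next=4.0, done=False)
```

In a full TD3 agent both critics are regressed toward this shared target, while the actor is updated less frequently (the "delayed" part of the name).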

Article
Open Source Control Device for Industry 4.0 Based on RAMI 4.0
Electronics 2021, 10(7), 869; https://doi.org/10.3390/electronics10070869 - 06 Apr 2021
Cited by 4 | Viewed by 755
Abstract
The technical innovation of the fourth industrial revolution (Industry 4.0—I4.0) is based on the following conditions: horizontal and vertical integration of manufacturing systems, decentralization of computing resources and continuous digital engineering throughout the product life cycle. The reference architecture model for Industry 4.0 (RAMI 4.0) is a common model for systematizing, structuring and mapping the complex relationships and functionalities required in I4.0 applications. Despite its adoption in I4.0 projects, RAMI 4.0 is an abstract model, not an implementation guide, which hinders its adoption and full deployment. As a result, many papers have recently studied the interactions required among the elements distributed along the three axes of RAMI 4.0 to develop a solution compatible with the model. This paper investigates RAMI 4.0 and describes our proposal for the development of an open-source control device for I4.0 applications. The control device is one of the elements in the hierarchy-level axis of RAMI 4.0. Its main contribution is the integration of open-source solutions of hardware, software, communication and programming, covering the relationships among three layers of RAMI 4.0 (assets, integration and communication). The implementation of a proof of concept of the control device is discussed. Experiments in an I4.0 scenario were used to validate the operation of the control device and demonstrated its effectiveness and robustness, with no interruptions, failures or communication problems during the experiments. Full article

Article
Facial Emotion Recognition from an Unmanned Flying Social Robot for Home Care of Dependent People
Electronics 2021, 10(7), 868; https://doi.org/10.3390/electronics10070868 - 06 Apr 2021
Viewed by 607
Abstract
This work is part of an ongoing research project to develop an unmanned flying social robot to monitor dependants at home in order to detect the person’s state and bring the necessary assistance. In this sense, this paper focuses on the description of a virtual reality (VR) simulation platform for the monitoring process of an avatar in a virtual home by a rotatory-wing autonomous unmanned aerial vehicle (UAV). This platform is based on a distributed architecture composed of three modules communicated through the message queue telemetry transport (MQTT) protocol: the UAV Simulator implemented in MATLAB/Simulink, the VR Visualiser developed in Unity, and the new emotion recognition (ER) system developed in Python. Using a face detection algorithm and a convolutional neural network (CNN), the ER System is able to detect the person’s face in the image captured by the UAV’s on-board camera and classify the emotion among seven possible ones (surprise, fear, happiness, sadness, disgust, anger, or neutral expression). The experimental results demonstrate the correct integration of this new computer vision module within the VR platform, as well as the good performance of the designed CNN, with an F1-score (the harmonic mean of the model’s precision and recall) of around 85%. The developed emotion detection system can be used in the future implementation of the assistance UAV that monitors dependent people in a real environment, since the methodology used is valid for images of real people. Full article
(This article belongs to the Special Issue Applications and Trends in Social Robotics)

Review
Deep Learning Algorithms for Single Image Super-Resolution: A Systematic Review
Electronics 2021, 10(7), 867; https://doi.org/10.3390/electronics10070867 - 06 Apr 2021
Cited by 2 | Viewed by 940
Abstract
Image super-resolution has recently become an important technology, especially in the medical and industrial fields. As such, much effort has been devoted to developing image super-resolution algorithms. Recent methods are convolutional neural network (CNN)-based. The super-resolution convolutional neural network (SRCNN) was the pioneer of CNN-based algorithms, and it has continued to be improved to this day through different techniques, including the loss functions used, the upsampling modules deployed, and the network design strategies adopted. In this paper, a total of 18 articles were selected through the PRISMA standard, and the 19 algorithms found in them were reviewed. Several aspects are reviewed and compared, including the datasets used, loss functions used, evaluation metrics applied, upsampling modules deployed, and design techniques adopted. For each upsampling module and design technique, the respective advantages and disadvantages are also summarized. Full article
(This article belongs to the Special Issue Deep Learning Technologies for Machine Vision and Audition)
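PSNR is the evaluation metric most commonly compared across the super-resolution algorithms such a review covers; it is derived directly from the mean squared error between the reference and reconstructed images. A small sketch (flat pixel lists for brevity; names are ours, for illustration):

```python
import math

def psnr(reference, reconstructed, max_val=255.0):
    # Peak Signal-to-Noise Ratio in dB; higher means the reconstruction
    # is closer to the reference image
    n = len(reference)
    mse = sum((r - x) ** 2 for r, x in zip(reference, reconstructed)) / n
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * math.log10(max_val ** 2 / mse)
```

Because PSNR is a pure pixel-error measure, super-resolution reviews usually pair it with perceptual metrics such as SSIM, which this formula does not capture.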

Article
Parallel Computation of CRC-Code on an FPGA Platform for High Data Throughput
Electronics 2021, 10(7), 866; https://doi.org/10.3390/electronics10070866 - 06 Apr 2021
Viewed by 591
Abstract
With the rapid advancement of radiation-hard imaging technology, space-based remote sensing instruments are becoming not only more sophisticated but are also generating substantially larger amounts of data for rapid processing. For applications that rely on data transmitted from a planetary probe to a relay spacecraft to Earth, alteration or discontinuity in data over a long transmission distance is likely to happen. Cyclic Redundancy Check (CRC) is one of the most well-known package error check techniques in sensor networks for critical applications. However, serial CRC computation can be a bottleneck for throughput in such systems. In this work, we design, implement, and validate an efficient hybrid look-up-table and matrix-transformation algorithm for a high-throughput parallel computational unit that speeds up CRC computation, using both a CPU and a Field Programmable Gate Array (FPGA) and comparing the two methods. Full article
(This article belongs to the Section Networks)
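The serial bottleneck this abstract mentions is the reason table-driven and parallel CRC schemes exist. As background (the classic byte-at-a-time look-up-table approach, not the paper's hybrid LUT/matrix design), a standard reflected CRC-32 looks like this:

```python
def make_crc32_table():
    # Precompute a 256-entry table: the CRC contribution of each possible
    # input byte, for the reflected CRC-32 polynomial 0xEDB88320
    table = []
    for byte in range(256):
        crc = byte
        for _ in range(8):
            crc = (crc >> 1) ^ 0xEDB88320 if crc & 1 else crc >> 1
        table.append(crc)
    return table

CRC32_TABLE = make_crc32_table()

def crc32(data: bytes) -> int:
    # Process one byte per table look-up instead of one bit per iteration
    crc = 0xFFFFFFFF
    for b in data:
        crc = (crc >> 8) ^ CRC32_TABLE[(crc ^ b) & 0xFF]
    return crc ^ 0xFFFFFFFF
```

The standard check value for this CRC-32 variant is `crc32(b"123456789") == 0xCBF43926`. Parallel hardware implementations widen the step from one byte to a whole word using the same precomputation idea, which is where matrix-based formulations come in.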

Article
Analysis of the K2 Scheduler for a Real-Time System with an SSD
Electronics 2021, 10(7), 865; https://doi.org/10.3390/electronics10070865 - 06 Apr 2021
Viewed by 597
Abstract
While an SSD (Solid State Drive) has been widely used for storage in many computing systems due to its small average latency, how to provide timing guarantees of a delay-sensitive (real-time) task on a real-time system equipped with an SSD has not been fully explored. A recent study has proposed a work-constraining I/O scheduler, called K2, which has succeeded in reducing the tail latency of a real-time task at the expense of compromising the total bandwidth for real-time and non-real-time tasks. Although the queue length bound parameter of the K2 scheduler is a key to regulate the tradeoff between a decrease in the tail latency of a real-time task and an increase in penalty of the total bandwidth, the parameter’s impact on the tradeoff has not been thoroughly investigated. In particular, no studies have addressed how the case of a fully occupied SSD that incurs garbage collection changes the performance of the K2 scheduler in terms of the tail latency of the real-time task and the total bandwidth. In this paper, we systematically analyze the performance of the K2 scheduler for different I/O operation types, based on experiments on Linux. We investigate how the performance is changed on a fully occupied SSD due to garbage collection. Utilizing the investigation, we draw general guidelines on how to select a proper setting of the queue length bound for better performance. Finally, we propose how to apply the guidelines to achieve target objectives that optimize the tail latency of the real-time task and the total bandwidth at the same time, which has not been achieved by previous studies. Full article
(This article belongs to the Special Issue Real-Time Systems, Cyber-Physical Systems and Applications)

Article
Low-Cost Implementation of Reactive Jammer on LoRaWAN Network
Electronics 2021, 10(7), 864; https://doi.org/10.3390/electronics10070864 - 05 Apr 2021
Viewed by 777
Abstract
The Low-Power Wide-Area Network (LPWA) has already started to gain notable adoption in the Internet of Things (IoT) landscape due to its enormous potential. It is already employed in a wide variety of scenarios involving parking lot occupancy, package delivery, smart irrigation, smart lighting, fire detection, etc. If messages from LPWA devices can be manipulated or blocked, this violates the integrity of the collected information and leads to unobserved events (e.g., fire, leakage). This paper explores the possibility of violating message integrity by applying a reactive jamming technique that disrupts a Long Range Wide Area Network (LoRaWAN). As shown in this paper, using low-cost commodity hardware based on the Arduino platform, an attacker can easily mount such an attack, which would with high probability result in completely shutting down the entire LoRaWAN network. Several countermeasures are introduced to reduce the possibility of jamming attacks. Full article

Article
The Analysis of SEU in Nanowire FETs and Nanosheet FETs
Electronics 2021, 10(7), 863; https://doi.org/10.3390/electronics10070863 - 05 Apr 2021
Viewed by 532
Abstract
The effects of the single-event upset (SEU) generated by radiation on nanowire field-effect transistors (NW-FETs) and nanosheet (NS)-FETs were analyzed according to the incident angle and location of radiation, by using three-dimensional technology computer-aided design tools. The greatest SEU occurred when the particle was incident at 90°, whereas the least occurred at 15°. SEU was significantly affected when the particle was incident on the drain, as compared to when it was incident on the source. The NS-FETs were robust to SEU, unlike the NW-FETs. This phenomenon can be attributed to the difference in the area exposed to radiation, even if the channel widths of these devices were identical. Full article
(This article belongs to the Special Issue New CMOS Devices and Their Applications II)

Article
VLSI Implementation of a Cost-Efficient Loeffler DCT Algorithm with Recursive CORDIC for DCT-Based Encoder
Electronics 2021, 10(7), 862; https://doi.org/10.3390/electronics10070862 - 05 Apr 2021
Viewed by 517
Abstract
This paper presents a low-cost and high-quality, hardware-oriented, two-dimensional discrete cosine transform (2-D DCT) signal analyzer for image and video encoders. In order to reduce the memory requirement and improve image quality, a novel Loeffler DCT based on a coordinate rotation digital computer (CORDIC) technique is proposed. In addition, the proposed algorithm is realized by a recursive CORDIC architecture instead of an unfolded CORDIC architecture with approximated scale factors. In the proposed design, a fully pipelined architecture is developed to efficiently increase operating frequency and throughput, and scale factors are implemented by using four hardware-sharing machines for complexity reduction. Thus, the computational complexity can be decreased significantly with only a 0.01 dB deviation from the optimal image quality of the Loeffler DCT. Experimental results show that the proposed 2-D DCT spectral analyzer not only achieved a superior average peak signal-to-noise ratio (PSNR) compared to previous CORDIC-DCT algorithms but also offers a cost-efficient architecture for very large scale integration (VLSI) implementation. The proposed design was realized using a UMC 0.18-μm CMOS process with a synthesized gate count of 8.04 k and a core area of 75,100 μm2. Its operating frequency was 100 MHz and power consumption was 4.17 mW. Moreover, this work achieved at least a 64.1% gate count reduction and saved at least 22.5% in power consumption compared to previous designs. Full article
(This article belongs to the Special Issue New Techniques for Image and Video Coding)
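CORDIC evaluates the rotations a DCT butterfly needs using only shifts, adds, and a small arctangent table; a recursive architecture like the one above reuses a single such stage. A floating-point sketch of rotation-mode CORDIC (our own illustrative code, not the proposed hardware):

```python
import math

def cordic_cos_sin(angle, iterations=24):
    # Rotation-mode CORDIC: drive the residual angle z to zero with
    # micro-rotations by +/- atan(2^-i); in hardware the multiplications
    # by 2^-i are plain bit shifts
    x, y, z = 1.0, 0.0, angle
    gain = 1.0
    for i in range(iterations):
        d = 1.0 if z >= 0.0 else -1.0
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        z -= d * math.atan(2.0 ** -i)
        gain *= math.sqrt(1.0 + 2.0 ** (-2 * i))
    # Undo the accumulated CORDIC gain (about 1.6468 for many iterations)
    return x / gain, y / gain
```

For the DCT's fixed rotation angles, both the arctangent table and the gain are constants, which is what makes hardware sharing and scale-factor folding cheap.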

Article
Estimating Physical Activity Energy Expenditure Using an Ensemble Model-Based Patch-Type Sensor Module
Electronics 2021, 10(7), 861; https://doi.org/10.3390/electronics10070861 - 05 Apr 2021
Viewed by 479
Abstract
Chronic diseases, such as coronary artery disease and diabetes, are caused by inadequate physical activity and are the leading cause of increasing mortality and morbidity rates. Direct calorimetry by calorie production and indirect calorimetry by energy expenditure (EE) have been regarded as the best methods for estimating physical activity and EE. However, these methods are inconvenient, owing to the use of an oxygen respiration measurement mask. In this study, we propose a model that estimates physical activity EE using an ensemble model that combines artificial neural networks and genetic algorithms using data acquired from patch-type sensors. The proposed ensemble model achieved an accuracy of more than 92% (Root Mean Squared Error (RMSE) = 0.1893, R2 = 0.91, Mean Squared Error (MSE) = 0.014213, Mean Absolute Error (MAE) = 0.14020) by testing various structures through repeated experiments. Full article

Article
A 6-Locking Cycles All-Digital Duty Cycle Corrector with Synchronous Input Clock
Electronics 2021, 10(7), 860; https://doi.org/10.3390/electronics10070860 - 05 Apr 2021
Cited by 1 | Viewed by 441
Abstract
This paper proposes an all-digital duty cycle corrector with synchronous fast locking, and adopts a new quantization method to effectively produce a 180-degree phase, or half the delay, of the input clock. By taking two adjacent rising edges as inputs to two delay lines, the total delay time of one delay line is twice that of the other. This circuit uses a 0.18 μm CMOS process, and the overall chip area is 0.0613 mm2; the input clock frequency is 500 MHz to 1000 MHz, and the acceptable input clock duty cycle range is 20% to 80%. Measurement results show that the output clock duty cycle is 50% ± 2.5% at a supply voltage of 1.8 V operating at 1000 MHz, the power consumed is 10.1 mW, with peak-to-peak jitter of 9.89 ps. Full article
(This article belongs to the Section Circuit and Signal Processing)

Article
A Data Mining and Analysis Platform for Investment Recommendations
Electronics 2021, 10(7), 859; https://doi.org/10.3390/electronics10070859 - 04 Apr 2021
Cited by 2 | Viewed by 989
Abstract
This article describes the development of a recommender system to obtain buy/sell signals from the results of technical analyses and of forecasts performed for companies operating in the Spanish continuous market. It has a modular design to facilitate the scalability of the model and the improvement of functionalities. The modules are: analysis and data mining, the forecasting system, the technical analysis module, the recommender system, and the visualization platform. The specification of each module is presented, as well as the dependencies and communication between them. Moreover, the proposal includes a visualization platform for high-level interaction between the user and the recommender system. This platform presents the conclusions that were abstracted from the resulting values. Full article

Article
Applications of Direct-Current Current–Voltage Method to Total Ionizing Dose Radiation Characterization in SOI NMOSFETs with Different Process Conditions
Electronics 2021, 10(7), 858; https://doi.org/10.3390/electronics10070858 - 04 Apr 2021
Viewed by 614
Abstract
As a promising candidate for space radiation-hardened applications, silicon-on-insulator (SOI) devices face the severe problem of total ionizing dose (TID) radiation because of the thick buried oxide (BOX) layer. The direct-current current–voltage (DCIV) method was applied to study TID radiation of SOI metal–oxide–semiconductor field-effect transistors (MOSFETs) with different manufacturing processes. It is found that the DCIV peak of high-voltage well (PX) devices shows a larger left-shift and a slower increase in magnitude with radiation dose, compared with that of low-voltage well (PV) devices. It is illustrated that the high P-type impurity concentration near the back interface makes it more difficult to break silicon–hydrogen bonds, which gives the PX devices superiority in resisting the build-up of interface traps. The results indicate that increasing the doping concentration of the body region near the back-gate interface might be an alternative radiation hardening technique for SOI MOSFET devices to avoid leakage through the parasitic back transistors. Full article
(This article belongs to the Section Semiconductor Devices)

Review
Opto-Electronic Oscillators for Micro- and Millimeter Wave Signal Generation
Electronics 2021, 10(7), 857; https://doi.org/10.3390/electronics10070857 - 03 Apr 2021
Cited by 1 | Viewed by 914
Abstract
High-frequency signal oscillators are devices needed for a variety of scientific disciplines. One of their fundamental requirements is low phase noise in the micro- and millimeter wave ranges. The opto-electronic oscillator (OEO) is a good candidate for this, as it is capable of generating a signal with very low phase noise in the micro- and millimeter wave ranges. The OEO consists of an optical resonator with electrical feedback components. The optical components form a delay line, which has the advantage that the phase noise is independent of the oscillator’s frequency. Furthermore, by using a long delay line, the phase noise characteristics of the oscillator are improved. This makes it possible to widen the range of possible OEO applications. In this paper we have reviewed the state of the art for OEOs and micro- and millimeter wave signal generation as well as new developments for OEOs and the use of OEOs in a variety of applications. In addition, a possible implementation of a centralized OEO signal distribution as a local oscillator for a 5G radio access network (RAN) is demonstrated. Full article
(This article belongs to the Section Microelectronics)

Article
Impact of People’s Movement on Wi-Fi Link Throughput in Indoor Propagation Environments: An Empirical Study
Electronics 2021, 10(7), 856; https://doi.org/10.3390/electronics10070856 - 03 Apr 2021
Viewed by 552
Abstract
There has been tremendous growth in the deployment of Wi-Fi 802.11-based networks in recent years. Many researchers have investigated the performance of Wi-Fi 802.11-based networks by exploring factors such as signal interference, radio propagation environments, and wireless protocols. However, the effect of people’s movement on Wi-Fi link throughput performance remains largely unexplored. This paper investigates the impact of people’s movement on Wi-Fi link throughput. This is achieved by setting up experimental scenarios using a pair of wireless laptops sharing files while there is human movement between the two nodes. Wi-Fi link throughput was measured in an obstructed office block, a laboratory, a library, and suburban residential home environments. The collected data from the experimental study show that the performance difference between fixed and random human movement had an overall average of 2.21 ± 0.07 Mbps. Empirical results show that the impact of people’s movement (fixed and random) on Wi-Fi link throughput is insignificant. The findings reported in this paper provide some insights into the effect of human movement on Wi-Fi throughput that can help network planners with the deployment of next-generation Wi-Fi systems. Full article
(This article belongs to the Special Issue 10th Anniversary of Electronics: Advances in Networks)

Article
Optimization of Oxygen Plasma Treatment on Ohmic Contact for AlGaN/GaN HEMTs on High-Resistivity Si Substrate
Electronics 2021, 10(7), 855; https://doi.org/10.3390/electronics10070855 - 03 Apr 2021
Viewed by 679
Abstract
An oxygen plasma surface treatment applied prior to ohmic metal deposition was developed to reduce the ohmic contact resistance (RC) of AlGaN/GaN high electron mobility transistors (HEMTs) on a high-resistivity Si substrate. The oxygen plasma, produced by an inductively coupled plasma (ICP) etching system, was optimized by varying the combination of radio frequency (RF) and ICP power. Using transmission line method (TLM) measurements, an ohmic contact resistance of 0.34 Ω∙mm and a specific contact resistivity (ρC) of 3.29 × 10−6 Ω∙cm2 were obtained with the optimized oxygen plasma conditions (ICP power of 250 W, RF power of 75 W, 0.8 Pa, O2 flow of 30 cm3/min, 5 min), about 74% lower than that of the reference sample. Atomic force microscopy (AFM), energy dispersive X-ray spectroscopy (EDX), and photoluminescence (PL) measurements revealed that a large number of nitrogen vacancies induced near the surface by the oxygen plasma treatment was the primary factor in the formation of the low-resistance ohmic contact. Finally, this plasma treatment was integrated into the HEMT process, and a maximum drain saturation current of 0.77 A/mm was obtained at a gate bias of 2 V on AlGaN/GaN HEMTs. Oxygen plasma treatment is a simple and efficient approach, requiring no additional mask or etch process, and shows promise for improving the Direct Current (DC) and RF performance of AlGaN/GaN HEMTs. Full article
(This article belongs to the Section Semiconductor Devices)
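The TLM extraction mentioned above amounts to a straight-line fit of total pad-to-pad resistance versus gap spacing: the intercept gives twice the contact resistance, the slope gives the sheet resistance, and the specific contact resistivity follows from the transfer length. A minimal sketch with synthetic data (the sheet resistance, contact width, and gap spacings below are assumptions, not the paper's values):

```python
# Transmission line method (TLM) sketch; assumed geometry and materials data.
W = 1.0          # contact width in mm (assumed)
R_sh = 350.0     # sheet resistance in ohm/sq (assumed)
R_c = 0.34       # contact resistance in ohm.mm that the fit should recover

# Synthetic total resistance R(d) = 2*Rc/W + (R_sh/W)*d for pad gaps d (mm)
gaps = [0.005, 0.010, 0.015, 0.020, 0.025]
R_tot = [2 * R_c / W + (R_sh / W) * d for d in gaps]

# Ordinary least-squares line fit: slope = R_sh/W, intercept = 2*Rc/W
n = len(gaps)
mx = sum(gaps) / n
my = sum(R_tot) / n
slope = (sum((x - mx) * (y - my) for x, y in zip(gaps, R_tot))
         / sum((x - mx) ** 2 for x in gaps))
intercept = my - slope * mx

Rc_fit = intercept * W / 2      # contact resistance, ohm.mm
Rsh_fit = slope * W             # sheet resistance, ohm/sq
L_T = Rc_fit / Rsh_fit          # transfer length, mm
rho_c = Rsh_fit * L_T ** 2      # specific contact resistivity, ohm.mm^2
rho_c_ohm_cm2 = rho_c / 100.0   # 1 mm^2 = 0.01 cm^2
```

With the assumed sheet resistance, a 0.34 Ω∙mm contact corresponds to a ρC of about 3.3 × 10−6 Ω∙cm2, the same order as reported.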
Article
Towards Controlled Transmission: A Novel Power-Based Sparsity-Aware and Energy-Efficient Clustering for Underwater Sensor Networks in Marine Transport Safety
Electronics 2021, 10(7), 854; https://doi.org/10.3390/electronics10070854 - 03 Apr 2021
Cited by 1 | Viewed by 678
Abstract
Energy-efficient management and highly reliable communication and transmission mechanisms are major issues in Underwater Wireless Sensor Networks (UWSNs) due to the limited battery power of UWSN nodes in a harsh underwater environment. In this paper, we integrate three main techniques into a protocol for Controlled Transmission Power-based Sparsity-aware Energy-Efficient Clustering (CTP-SEEC) in UWSNs. These include an adaptive power-control mechanism that switches to a suitable Transmission Power Level (TPL) and the deployment of collaborating mobile sinks or Autonomous Underwater Vehicles (AUVs) that gather information locally to achieve energy efficiency and secure data management in the UWSN. The proposed protocol is rigorously evaluated through extensive simulations and is validated by comparing it with state-of-the-art UWSN protocols. The simulation results, based on a static environmental condition, show that the proposed protocol performs well in terms of network lifetime, packet delivery, and throughput. Full article
(This article belongs to the Section Networks)
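The adaptive power-control idea can be sketched as choosing the lowest transmission power level whose link budget clears an SNR requirement. The candidate levels, noise floor, and threshold below are invented for illustration and are not part of the CTP-SEEC specification:

```python
# Illustrative adaptive TPL (transmission power level) selection; all numbers
# are hypothetical, not from the CTP-SEEC protocol.
TPL_DBM = [-10, -5, 0, 5, 10]   # candidate transmit powers (dBm)

def select_tpl(path_loss_db, noise_floor_dbm=-90.0, snr_req_db=10.0):
    """Pick the lowest power level whose estimated SNR meets the requirement."""
    for p in TPL_DBM:
        snr = p - path_loss_db - noise_floor_dbm
        if snr >= snr_req_db:
            return p
    return TPL_DBM[-1]           # fall back to maximum power

near = select_tpl(path_loss_db=70.0)   # a close neighbor needs little power
far = select_tpl(path_loss_db=85.0)    # a distant node needs more
```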
Article
A Simulated Annealing Algorithm and Grid Map-Based UAV Coverage Path Planning Method for 3D Reconstruction
Electronics 2021, 10(7), 853; https://doi.org/10.3390/electronics10070853 - 02 Apr 2021
Cited by 4 | Viewed by 705
Abstract
With the extensive application of 3D maps, acquiring high-quality images with unmanned aerial vehicles (UAVs) for precise 3D reconstruction has become a prominent topic of study. In this research, we propose a coverage path planning method for UAVs to achieve full coverage of a target area and to collect high-resolution images while considering the overlap ratio of the collected images and the energy consumption of clustered UAVs. The overlap ratio of the collected image set is guaranteed through a map decomposition method, which ensures that the reconstruction results are not affected by model breakage. In consideration of the small battery capacity of common commercial quadrotor UAVs, ray-scan-based area division was adopted to segment the target area, and near-optimal paths in the subareas were calculated by a simulated annealing algorithm, which achieves balanced task assignment for UAV formations and minimum energy consumption for each UAV. The proposed system was validated through a site experiment and achieved a reduction in path length of approximately 12.6% compared to the traditional zigzag path. Full article
(This article belongs to the Special Issue Advances in SLAM and Data Fusion for UAVs/Drones)
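The simulated-annealing step can be illustrated with a tiny waypoint-ordering problem: start from an arbitrary visiting order, propose 2-opt reversals, and accept worse moves with a temperature-dependent probability. The 3×3 waypoint grid and the cooling schedule below are illustrative choices, not the paper's parameters:

```python
import math
import random

def path_length(order, pts):
    """Total length of an open path visiting pts in the given order."""
    return sum(math.dist(pts[order[i]], pts[order[i + 1]])
               for i in range(len(order) - 1))

def anneal(pts, t0=10.0, cooling=0.995, iters=20000, seed=0):
    rng = random.Random(seed)
    order = list(range(len(pts)))
    best = order[:]
    t = t0
    for _ in range(iters):
        i, j = sorted(rng.sample(range(len(pts)), 2))
        cand = order[:i] + order[i:j + 1][::-1] + order[j + 1:]  # 2-opt move
        delta = path_length(cand, pts) - path_length(order, pts)
        if delta < 0 or rng.random() < math.exp(-delta / t):
            order = cand
            if path_length(order, pts) < path_length(best, pts):
                best = order[:]
        t *= cooling
    return best

# 3x3 grid of coverage waypoints, 1 m apart
grid = [(x, y) for x in range(3) for y in range(3)]
route = anneal(grid)
```

For this grid the best open path is the boustrophedon (zigzag) sweep of length 8 m; the annealer should land at or near it from a worse initial order.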
Article
Human Signature Identification Using IoT Technology and Gait Recognition
Electronics 2021, 10(7), 852; https://doi.org/10.3390/electronics10070852 - 02 Apr 2021
Cited by 4 | Viewed by 1015
Abstract
This study aimed to develop an autonomous system for recognizing a subject by gait posture. Gait posture is a type of non-verbal communication characteristic of each person and can be considered a signature used in identification. The system can also be used for diagnosis: it helps aging or disabled subjects identify incorrect posture in order to recover their gait. Gait posture provides information for subject identification, using leg movements and step distance as characteristic parameters. In the current study, the inertial measurement units (IMUs) located in a mobile phone were used to provide information about the movement of the upper and lower parts of the leg. A resistive flex sensor (RFS) was used to obtain information about foot contact with the ground. The data were collected from a target group comprising subjects of different ages, heights, and masses. A comparative study was undertaken to identify the subject from the gait posture. Statistical analysis and a machine learning algorithm were used for data processing. The errors obtained after training are presented at the end of the paper, and the results are encouraging. This article proposes a method of acquiring data that is available to anyone, using ubiquitous devices such as mobile phones. Full article
(This article belongs to the Special Issue Human Computer Interaction and Its Future)
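One gait parameter of the kind described above, stride time, can be read off the RFS foot-contact signal by detecting contact onsets. The sampled waveform and the 100 Hz rate below are hypothetical, not the study's recordings:

```python
# Stride timing from a resistive flex sensor (RFS) contact signal; the
# synthetic waveform and sample rate are illustrative assumptions.
SAMPLE_RATE_HZ = 100

def contact_onsets(rfs, threshold=0.5):
    """Indices where the foot-contact signal rises through the threshold."""
    return [i for i in range(1, len(rfs))
            if rfs[i - 1] < threshold <= rfs[i]]

def stride_times(rfs, fs=SAMPLE_RATE_HZ):
    """Seconds between consecutive foot contacts of the same foot."""
    onsets = contact_onsets(rfs)
    return [(b - a) / fs for a, b in zip(onsets, onsets[1:])]

# Synthetic signal: foot contact every 110 samples (a 1.1 s stride)
signal = [1.0 if (i % 110) < 60 else 0.0 for i in range(440)]
strides = stride_times(signal)
```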
Article
Informing Piano Multi-Pitch Estimation with Inferred Local Polyphony Based on Convolutional Neural Networks
Electronics 2021, 10(7), 851; https://doi.org/10.3390/electronics10070851 - 02 Apr 2021
Viewed by 787
Abstract
In this work, we propose considering information about the local polyphony for multi-pitch estimation (MPE) in piano music recordings. To that aim, we propose a method for local polyphony estimation (LPE), which is based on convolutional neural networks (CNNs) trained in a supervised fashion to explicitly predict the degree of polyphony. We investigate two feature representations as inputs to our method: the Constant-Q Transform (CQT) and its recent extension, the Folded-CQT (F-CQT). To evaluate the performance of our method, we conduct a series of experiments on real and synthetic piano recordings based on the MIDI Aligned Piano Sounds (MAPS) and the Saarland Music Data (SMD) datasets. We compare our approaches with a state-of-the-art piano transcription method by informing said method with the LPE knowledge in a post-processing stage. The experimental results suggest that using explicit LPE information can refine MPE predictions. Furthermore, it is shown that, on average, the CQT representation is preferred over the F-CQT for LPE. Full article
(This article belongs to the Special Issue Machine Learning Applied to Music/Audio Signal Processing)
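One simple way an LPE estimate can inform MPE in post-processing is to keep only the top-k pitch candidates per frame, where k is the inferred local polyphony. This is a hedged sketch of that idea, not the paper's exact post-processing rule; the salience values are made-up placeholders:

```python
# Use an inferred polyphony count to prune a frame's pitch candidates.
def prune_with_lpe(pitch_salience, polyphony):
    """pitch_salience: {midi_pitch: score}; polyphony: inferred note count."""
    ranked = sorted(pitch_salience, key=pitch_salience.get, reverse=True)
    return sorted(ranked[:polyphony])

# Hypothetical per-frame salience for five MIDI pitch candidates
frame = {60: 0.91, 64: 0.84, 67: 0.80, 72: 0.35, 55: 0.20}
notes = prune_with_lpe(frame, polyphony=3)   # only the C major triad survives
```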
Article
An Interpretable Deep Learning Model for Automatic Sound Classification
Electronics 2021, 10(7), 850; https://doi.org/10.3390/electronics10070850 - 02 Apr 2021
Cited by 2 | Viewed by 1417
Abstract
Deep learning models have improved cutting-edge technologies in many research areas, but their black-box structure makes it difficult to understand their inner workings and the rationale behind their predictions. This may lead to unintended effects, such as susceptibility to adversarial attacks or the reinforcement of biases. Despite the increasing interest in developing deep learning models that provide explanations of their decisions, there is still a lack of such research in the audio domain. To reduce this gap, we propose a novel interpretable deep learning model for automatic sound classification, which explains its predictions based on the similarity of the input to a set of learned prototypes in a latent space. We leverage domain knowledge by designing a frequency-dependent similarity measure and by considering different time-frequency resolutions in the feature space. The proposed model achieves results comparable to those of state-of-the-art methods in three different sound classification tasks involving speech, music, and environmental audio. In addition, we present two automatic methods to prune the proposed model that exploit its interpretability. Our system is open source and is accompanied by a web application for the manual editing of the model, which allows for a human-in-the-loop debugging approach. Full article
(This article belongs to the Special Issue Machine Learning Applied to Music/Audio Signal Processing)
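The core prototype idea can be sketched as scoring a latent vector by its (negative squared) distance to each learned prototype and normalizing the scores with a softmax. The prototypes and input below are toy values, not learned ones, and the plain Euclidean distance stands in for the paper's frequency-dependent similarity measure:

```python
import math

def classify(z, prototypes):
    """z: latent vector; prototypes: {label: vector}. Returns (label, probs)."""
    # Similarity = negative squared Euclidean distance to each prototype
    sims = {lbl: -sum((a - b) ** 2 for a, b in zip(z, p))
            for lbl, p in prototypes.items()}
    # Numerically stable softmax over similarities
    mx = max(sims.values())
    exps = {lbl: math.exp(s - mx) for lbl, s in sims.items()}
    total = sum(exps.values())
    probs = {lbl: e / total for lbl, e in exps.items()}
    return max(probs, key=probs.get), probs

protos = {"speech": [1.0, 0.0], "music": [0.0, 1.0], "env": [-1.0, -1.0]}
label, probs = classify([0.9, 0.1], protos)
```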
Article
UFMC Waveform and Multiple-Access Techniques for 5G RadCom
Electronics 2021, 10(7), 849; https://doi.org/10.3390/electronics10070849 - 02 Apr 2021
Viewed by 620
Abstract
In recent years, multiple functions traditionally realized by hardware components have been replaced by digital signal processing, making radar and wireless communication technologies more similar. A joint radar and communication system, referred to as a RadCom system, has been proposed to overcome the drawbacks of conventional radar techniques while using the same system for intervehicular communication; consequently, it makes better use of spectral resources. Conventional orthogonal frequency division multiplexing (OFDM) was proposed as a RadCom waveform. However, due to OFDM’s multiple shortcomings, we propose universal filtered multicarrier (UFMC), a 5G waveform candidate, as a RadCom waveform that offers a good trade-off between performance and complexity. In addition, we propose multicarrier code division multiple access (MC-CDMA) as a multiple-access (MA) technique that offers strong performance in terms of multiuser detection and power efficiency. Moreover, we study how the UFMC filter length and MC-CDMA spreading sequences impact overall performance on both radar and communication separately under a multipath channel. An analysis of the bit error rate (BER) of the UFMC waveform was performed to confirm the experimental results. Full article
(This article belongs to the Section Microwave and Wireless Communications)
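MC-CDMA separates users by spreading each symbol over subcarriers with orthogonal codes, classically Walsh-Hadamard sequences. A minimal two-user sketch over an ideal channel (length-4 codes and BPSK symbols are illustrative choices, not the paper's configuration):

```python
# MC-CDMA spreading/despreading with Walsh-Hadamard codes (ideal channel).
def hadamard(n):
    """Sylvester construction of an n x n Hadamard matrix; n a power of two."""
    H = [[1]]
    while len(H) < n:
        H = ([row + row for row in H]
             + [row + [-x for x in row] for row in H])
    return H

H4 = hadamard(4)
code_a, code_b = H4[1], H4[2]   # two orthogonal user codes
sym_a, sym_b = 1, -1            # BPSK symbols for users A and B

# Superimposed chips across the subcarriers
chips = [sym_a * ca + sym_b * cb for ca, cb in zip(code_a, code_b)]

def despread(chips, code):
    """Correlate the received chips with one user's code."""
    return sum(c * k for c, k in zip(chips, code)) / len(code)

rx_a = despread(chips, code_a)  # recovers user A's symbol
rx_b = despread(chips, code_b)  # recovers user B's symbol
```

Code orthogonality is what makes each user's symbol recoverable despite the superposition; a multipath channel (as studied in the paper) degrades this orthogonality.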
Article
CoLL-IoT: A Collaborative Intruder Detection System for Internet of Things Devices
Electronics 2021, 10(7), 848; https://doi.org/10.3390/electronics10070848 - 02 Apr 2021
Viewed by 569
Abstract
The Internet of Things (IoT) and its applications are becoming popular among many users nowadays, as they make life easier. Because of this popularity, attacks that target these devices have increased dramatically, which might cause the entire system to become unavailable. Some of these attacks are denial-of-service, Sybil, man-in-the-middle, and replay attacks. As the attacks have increased, so have the solutions for detecting malware in the IoT. Most current solutions have serious limitations, and malware is becoming more adept at exploiting them. Therefore, it is important to develop a tool that overcomes the limitations of existing detection systems. This paper presents CoLL-IoT, a CoLLaborative intruder detection system that detects malicious activities in IoT devices. CoLL-IoT consists of four main layers: IoT layer, network layer, fog layer, and cloud layer. All of the layers work collaboratively by monitoring and analyzing all of the network traffic generated and received by IoT devices. CoLL-IoT brings detection close to the IoT devices by taking advantage of the edge computing and fog computing paradigms. The proposed system was evaluated on the UNSW-NB15 dataset, which has more than 175,000 records, and achieved an accuracy of up to 98% with a low type II error rate of 0.01. The evaluation results showed that CoLL-IoT outperformed other existing tools, such as Dendron, which was also evaluated on the UNSW-NB15 dataset. Full article
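For reference, the two reported metrics follow directly from confusion-matrix counts; in intrusion detection, the type II error is the false-negative rate (attacks classified as benign). The counts below are hypothetical values chosen only to illustrate the formulas, not the CoLL-IoT results:

```python
# Accuracy and type II error (false-negative rate) from confusion counts.
def metrics(tp, tn, fp, fn):
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    type2 = fn / (fn + tp)   # missed attacks among all actual attacks
    return accuracy, type2

# Hypothetical confusion counts for a binary attack/benign classifier
acc, t2 = metrics(tp=990, tn=970, fp=30, fn=10)
```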
Article
Supporting SLA via Adaptive Mapping and Heterogeneous Storage Devices in Ceph
Electronics 2021, 10(7), 847; https://doi.org/10.3390/electronics10070847 - 02 Apr 2021
Viewed by 580
Abstract
This paper proposes a new resource management scheme that supports SLAs (Service-Level Agreements) in a big-data distributed storage system. The scheme makes use of two mapping modes, isolated mode and shared mode, in an adaptive manner. Specifically, to ensure different QoS (Quality of Service) requirements among clients, it isolates storage devices so that urgent clients are not interfered with by normal clients. When there is no urgent client, it switches to the shared mode so that normal clients can access all storage devices, thus achieving full performance. To provide this adaptability effectively, it devises two techniques, called logical cluster and normal inclusion. In addition, this paper explores how to exploit heterogeneous storage devices, HDDs (Hard Disk Drives) and SSDs (Solid State Drives), to support SLAs. It examines two use cases and observes that separating data and metadata onto different devices has a positive impact on the performance-per-cost ratio. Evaluation results from a real implementation show that the proposal can satisfy the requirements of diverse clients and provides better performance than a fixed mapping-based scheme. Full article
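The isolated/shared mode switch can be sketched as a mapping decision: while any urgent client is active, a reserved slice of devices serves urgent traffic exclusively; otherwise every client sees the full device set. The device names and the two-device reservation are assumptions for illustration, not the paper's (Ceph-based) implementation:

```python
# Adaptive isolated/shared device mapping (illustrative sketch).
DEVICES = ["osd0", "osd1", "osd2", "osd3"]
RESERVED = DEVICES[:2]          # slice isolated for urgent clients (assumed)

def devices_for(client_urgent, any_urgent_active):
    """Return the device set a client may use under the current mode."""
    if not any_urgent_active:
        return DEVICES          # shared mode: everyone gets full performance
    # isolated mode: urgent clients keep the reserved slice to themselves
    return RESERVED if client_urgent else DEVICES[2:]

shared = devices_for(client_urgent=False, any_urgent_active=False)
normal = devices_for(client_urgent=False, any_urgent_active=True)
urgent = devices_for(client_urgent=True, any_urgent_active=True)
```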
Article
Real-Time Prediction of Capacity Fade and Remaining Useful Life of Lithium-Ion Batteries Based on Charge/Discharge Characteristics
Electronics 2021, 10(7), 846; https://doi.org/10.3390/electronics10070846 - 01 Apr 2021
Cited by 4 | Viewed by 867
Abstract
We propose a robust and reliable method based on deep neural networks to estimate the remaining useful life of lithium-ion batteries in electric vehicles. In general, the degradation of a battery can be predicted by monitoring its internal resistance. However, prediction during battery operation cannot be achieved using conventional methods such as electrochemical impedance spectroscopy. Instead, the battery state can be predicted from the change in capacity according to the state of health. In the proposed method, a statistical analysis of capacity fade, considering the impedance increase with the degree of deterioration, is conducted by applying a deep neural network to diverse charge/discharge characteristic data. Probabilistic predictions based on the capacity-fade trends are then obtained with another deep neural network to improve the accuracy of the remaining-useful-life prediction. Full article
(This article belongs to the Special Issue Advances in Control for Electric Vehicle)
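The capacity-based remaining-useful-life (RUL) idea can be illustrated with a deliberately simple linear fade model: fit the capacity-versus-cycle trend and extrapolate to an end-of-life threshold. The paper itself uses deep neural networks; the cycle/capacity numbers and the 80%-of-nominal threshold below are common textbook assumptions, not the paper's data:

```python
# Toy capacity-fade extrapolation for RUL (linear model; illustrative only).
EOL_FRACTION = 0.8   # common end-of-life definition: 80% of nominal capacity

def remaining_cycles(cycles, capacities, nominal):
    """Least-squares line fit of capacity vs. cycle, extrapolated to EOL."""
    n = len(cycles)
    mx = sum(cycles) / n
    my = sum(capacities) / n
    slope = (sum((c - mx) * (q - my) for c, q in zip(cycles, capacities))
             / sum((c - mx) ** 2 for c in cycles))
    intercept = my - slope * mx
    eol_cycle = (EOL_FRACTION * nominal - intercept) / slope
    return eol_cycle - cycles[-1]

# Hypothetical observations: capacity fading ~0.0004 Ah per cycle
obs_cycles = [0, 100, 200, 300]
obs_caps = [2.00, 1.96, 1.92, 1.88]   # Ah
rul = remaining_cycles(obs_cycles, obs_caps, nominal=2.00)
```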
Article
An Empirical Study of Korean Sentence Representation with Various Tokenizations
Electronics 2021, 10(7), 845; https://doi.org/10.3390/electronics10070845 - 01 Apr 2021
Cited by 1 | Viewed by 663
Abstract
How the token unit is defined in a sentence is important in natural language processing (NLP) tasks such as text classification, machine translation, and generation. Many recent studies have utilized subword tokenization in language models such as BERT, KoBERT, and ALBERT. Although these language models achieve state-of-the-art results in various NLP tasks, it is not clear whether subword tokenization is the best token unit for Korean sentence embedding. Thus, we carried out sentence embedding based on word, morpheme, subword, and submorpheme units, respectively, for Korean sentiment analysis. We explored two sentence-representation methods for sentence embedding: one that considers the order of tokens in a sentence and one that does not. By feeding sentences decomposed by each token unit into the two sentence-representation methods, we constructed sentence embeddings with various tokenizations to find the most effective token unit for Korean sentence embedding. In our work, we confirmed the robustness of the subword unit against out-of-vocabulary (OOV) problems compared to other token units, the disadvantage of replacing whitespace with a particular symbol in the sentiment analysis task, and that the optimal vocabulary size is 16K for subword and submorpheme tokenization. We empirically found that subwords tokenized with a vocabulary size of 16K, without replacement of whitespace, were the most effective for sentence embedding in the Korean sentiment analysis task. Full article
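The OOV robustness of subword units can be shown with a toy example: a word absent from a word-level vocabulary can still be segmented into known subwords. The tiny vocabularies below are invented for illustration (real systems learn subword vocabularies with algorithms such as BPE at sizes like the 16K studied above):

```python
# Toy word-level vs. subword-level OOV comparison (invented vocabularies).
word_vocab = {"전자", "공학", "논문"}
subword_vocab = {"전", "자", "공", "학", "논", "문", "전자", "공학"}

def word_oov(token):
    """A word-level tokenizer simply fails on unseen words."""
    return token not in word_vocab

def subword_segment(token):
    """Greedy longest-match segmentation over the subword vocabulary."""
    out, i = [], 0
    while i < len(token):
        for j in range(len(token), i, -1):
            if token[i:j] in subword_vocab:
                out.append(token[i:j])
                i = j
                break
        else:
            return None          # truly unsegmentable
    return out

unseen = "전자공학"              # compound word absent from word_vocab
```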
Article
An IOTA-Based Service Discovery Framework for Fog Computing
Electronics 2021, 10(7), 844; https://doi.org/10.3390/electronics10070844 - 01 Apr 2021
Viewed by 964
Abstract
With the rise of fog computing, users are no longer restricted to accessing resources located in central and distant clouds and can request services from neighboring fog nodes distributed over networks. This can effectively reduce the network latency of service responses and the load on data centers. Furthermore, it can prevent the Internet’s bandwidth from being used up by massive data flows from end users to clouds. However, fog-computing resources are distributed over multiple levels of networks and are managed by different owners, so the problem of service discovery becomes quite complicated. To resolve this problem, a decentralized service discovery method is required. Accordingly, this research proposes a service discovery framework based on the distributed ledger technology of IOTA. The proposed framework enables clients to search for service nodes directly through any node in the IOTA Mainnet, achieving the goals of public access and high availability while avoiding the network attacks that affect the distributed hash tables popularly used for service discovery. Moreover, clients can obtain more comprehensive information by visiting known nodes and can select a fog node able to provide services with the shortest latency. Our experimental results show that the proposed framework is cost-effective for distributed service discovery, thanks to the advantages of IOTA, and that it enables clients to obtain higher service quality through automatic node selection. Full article
(This article belongs to the Special Issue Blockchain Technology and Its Applications)
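The final selection step described above, choosing the fog node with the shortest latency among discovered candidates, reduces to a minimum over measured round-trip times. A minimal sketch; the node names and latencies are made up:

```python
# Latency-based fog-node selection after discovery (illustrative values).
def pick_fastest(latencies_ms):
    """latencies_ms: {node_id: measured round-trip time in ms}."""
    return min(latencies_ms, key=latencies_ms.get)

candidates = {"fog-a": 12.5, "fog-b": 4.8, "fog-c": 9.1}
best = pick_fastest(candidates)
```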
Article
Channel Sounding and Scene Classification of Indoor 6G Millimeter Wave Channel Based on Machine Learning
Electronics 2021, 10(7), 843; https://doi.org/10.3390/electronics10070843 - 01 Apr 2021
Viewed by 563
Abstract
Millimeter wave, especially the high-frequency millimeter wave near 100 GHz, is one of the key spectrum resources for sixth generation (6G) mobile communication and can be used for precise positioning, imaging, and large-capacity data transmission. High-frequency millimeter wave channel sounding is therefore the first step toward better understanding 6G signal propagation. Because indoor wireless deployment is critical to 6G and classifying different scenes can simplify future radio network optimization, we built a 6G indoor millimeter wave channel sounding system from commercial instruments, based on the time-domain correlation method. Taking the transmission and reception of a typical 93 GHz W-band millimeter wave signal as an example, four indoor millimeter wave communication scenes were modeled. Furthermore, we propose a data-driven supervised machine learning method to extract fingerprint features from different scenes and train a scene classification model based on these features. Baseband data from the receiver were transformed into the channel Power Delay Profile (PDP), from which six fingerprint features were extracted for each scene. Decision tree, Support Vector Machine (SVM), and optimal bagging scene classification algorithms were used to train the machine learning models, with test accuracies of 94.3%, 86.4%, and 96.5%, respectively. The results show that the channel fingerprint classification model trained by the machine learning method is effective. This method can be extended to THz channel sounding and scene classification in the future. Full article
(This article belongs to the Section Microwave and Wireless Communications)
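Two of the fingerprint features typically read off a channel Power Delay Profile are the mean excess delay and the RMS delay spread. A sketch with synthetic tap powers and delays (not measured 93 GHz data, and not necessarily among the six features the paper uses):

```python
# Mean excess delay and RMS delay spread from a PDP (synthetic taps).
def mean_excess_delay(delays_ns, powers):
    """Power-weighted mean of the tap delays."""
    total = sum(powers)
    return sum(t * p for t, p in zip(delays_ns, powers)) / total

def rms_delay_spread(delays_ns, powers):
    """Square root of the power-weighted delay variance."""
    mu = mean_excess_delay(delays_ns, powers)
    total = sum(powers)
    second = sum(t * t * p for t, p in zip(delays_ns, powers)) / total
    return (second - mu * mu) ** 0.5

# Hypothetical three-tap PDP: delays in ns, powers in linear scale
taps_ns = [0.0, 10.0, 30.0]
powers = [1.0, 0.5, 0.25]
tau_rms = rms_delay_spread(taps_ns, powers)
```

Features like these, computed per scene, are exactly the kind of low-dimensional fingerprint a decision tree or SVM can be trained on.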