Electronics, Volume 9, Issue 9 (September 2020) – 223 articles

Cover Story: A low-cost, high-efficiency ultra-wideband (UWB) cavity-backed spiral antenna is proposed. It employs an equiangular spiral enclosed by an Archimedean spiral and is fed through a tapered microstrip balun. A center-raised cylindrical absorber-free cavity backs the spiral to minimize backward radiation without decreasing the efficiency. The cavity is designed to ensure an impedance bandwidth exceeding a 16:1 ratio (from 350 MHz to 5.5 GHz). Simulated and measured results are presented and compared, demonstrating competitive performance in terms of impedance bandwidth and efficiency. Time-domain measurements indicate a fidelity of 0.62 at boresight.
30 pages, 5536 KiB  
Article
A Modified Topology of a High Efficiency Bidirectional Type DC–DC Converter by Synchronous Rectification
by Somalinga S Sethuraman, KR. Santha, Lucian Mihet-Popa and C. Bharatiraja
Electronics 2020, 9(9), 1555; https://doi.org/10.3390/electronics9091555 - 22 Sep 2020
Cited by 7 | Viewed by 2992
Abstract
A modified topology for a high-efficiency, non-isolated bidirectional DC–DC converter is proposed. The modified circuit comprises four switches with their body diodes and, as passive elements, two inductors and a capacitor; the circuit arranges double boost converters to increase the voltage gain. Dividing the input current of the proposed topology between the two inductors of dissimilar values yields greater efficiency. In step-down mode, synchronous rectification provides the required reduction in voltage gain along with enhanced efficiency. The modified topology lends itself to simple control configurations and is intended for low-voltage, high-current energy storage systems in renewable-energy applications as well as hybrid-source electric vehicle applications. The proposed structure was simulated in MATLAB/Simulink and corroborated with a 12 V/180 V, 200 W experimental prototype circuit. Full article
(This article belongs to the Section Power Electronics)

20 pages, 6805 KiB  
Article
Digital System Performance Enhancement of a Tent Map-Based ADC for Monitoring Photovoltaic Systems
by Philippa Hazell, Peter Mather, Andrew Longstaff and Simon Fletcher
Electronics 2020, 9(9), 1554; https://doi.org/10.3390/electronics9091554 - 22 Sep 2020
Cited by 2 | Viewed by 2009
Abstract
Efficient photovoltaic installations require control systems that detect small signal variations over large measurement ranges. High measurement accuracy requires data acquisition systems with high-resolution analogue-to-digital converters; however, high resolutions and operational speeds generally increase costs. Research has proven that low-cost prototyping of non-linear chaotic Tent Map-based analogue-to-digital converters (which fold and amplify the input signal, emphasizing small signal variations) is feasible, but inherent non-ideal Tent Map gains reduce the output accuracy and restrict adoption within data acquisition systems. This paper demonstrates a novel compensation algorithm, developed as a digital electronic system, for non-ideal Tent Map gain, enabling high-accuracy estimation of the analogue-to-digital converter's analogue input signal. Approximation of the gain difference compensation values (reducing digital hardware requirements and enabling efficient real-time compensation) was also investigated via simulation. The algorithm improved the effective resolution of 16-, 20- and 24-Tent-Map-stage analogue-to-digital converter models from an average of 5 to 15.5, 19.2, and 23 bits, respectively, over the Tent Map gain range of 1.9 to 1.99. The simulated digital compensation system for a seven-Tent-Map-stage analogue-to-digital converter enhanced the accuracy from 4 to 7 bits, confirming that real-time compensation for non-ideal gain in Tent Map-based analogue-to-digital converters is achievable. Full article
(This article belongs to the Special Issue Reliability Analysis for Photovoltaic Systems)
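For readers unfamiliar with the idea behind the abstract above, the folding behavior of a Tent Map stage is easy to sketch. Each stage emits a comparator bit and re-amplifies the residue, so nearby inputs separate exponentially fast, which is exactly why non-ideal gain matters. This is an illustrative software model under assumed parameters (ideal gain 2, abstract stage cascade), not the authors' hardware design:

```python
def tent(x, g=2.0):
    # one folding stage: amplify by g below 0.5, fold and amplify above
    return g * x if x < 0.5 else g * (1.0 - x)

def fold(x, stages, g=2.0):
    # cascade of Tent Map stages: record a comparator bit, then fold
    bits, y = [], x
    for _ in range(stages):
        bits.append(1 if y >= 0.5 else 0)
        y = tent(y, g)
    return bits, y

# two inputs differing by 1e-4 diverge rapidly through the cascade,
# which is what emphasizes small signal variations
bits, residue = fold(0.3, 3)
```

With the ideal gain g = 2 the residue difference roughly doubles per stage; a non-ideal g (e.g., 1.95) breaks that relationship, which is the error the paper's compensation algorithm corrects.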

11 pages, 1316 KiB  
Article
Readily Design and Try-On Garments by Manipulating Segmentation Images
by Yoojin Jeong and Chae-Bong Sohn
Electronics 2020, 9(9), 1553; https://doi.org/10.3390/electronics9091553 - 22 Sep 2020
Cited by 2 | Viewed by 4532
Abstract
Recently, fashion industries have introduced artificial intelligence to provide new services, and research combining fashion design and artificial intelligence has been continuously conducted. Among these efforts, generative adversarial networks that synthesize realistic-looking images have been widely applied in the fashion industry. In this paper, a new apparel image is created using a generative model that can apply a new style to a desired area of a segmented image. It also creates a new fashion image by manipulating the segmentation image. Thus, interactive fashion image manipulation, which enables users to edit images by controlling segmentation images, is possible. This allows people to try new styles without the inconvenience of traveling or changing clothes. Furthermore, they can easily determine which colors and patterns best suit the clothes they wear, or whether other people's clothes match their own. Therefore, user-centered fashion design is possible, which is useful for virtually trying on or recommending clothes. Full article
(This article belongs to the Section Computer Science & Engineering)

18 pages, 4584 KiB  
Article
Sensing-HH: A Deep Hybrid Attention Model for Footwear Recognition
by Yumin Yao, Ya Wen and Jianxin Wang
Electronics 2020, 9(9), 1552; https://doi.org/10.3390/electronics9091552 - 22 Sep 2020
Viewed by 2297
Abstract
The human gait pattern is an emerging biometric trait for user identification on smart devices. However, one of the challenges in this biometric domain is the gait pattern change caused by footwear, especially if the users are wearing high heels (HH). Wearing HH puts extra stress and pressure on various parts of the human body and alters the wearer's common gait pattern, which may cause difficulties in gait recognition. In this paper, we propose Sensing-HH, a deep hybrid attention model for recognizing the subject's shoes, flat or different types of HH, using a smartphone's motion sensors. In this model, two streams of convolutional and bidirectional long short-term memory (LSTM) networks are designed as the backbone, which extract the hierarchical spatial and temporal representations of the accelerometer and gyroscope individually. We also introduce a spatial attention mechanism into the stacked convolutional layers to scan the crucial structure of the data. This mechanism enables the hybrid neural networks to capture extra information from the signal and thus significantly improve the discriminative power of the classifier for the footwear recognition task. To evaluate Sensing-HH, we built a dataset with 35 young females, each of whom walked for 4 min wearing shoes with varied heel heights. We conducted extensive experiments, and the results demonstrated that Sensing-HH outperformed the baseline models on leave-one-subject-out cross-validation (LOSO-CV). Sensing-HH achieved the best Fm score, 0.827, when the smartphone was attached to the waist, outperforming all the baseline methods by more than 14%. Meanwhile, the F1 score of Ultra HH was as high as 0.91. The results suggest the proposed model makes footwear recognition more efficient and automated. We hope the findings from this study pave the way for more sophisticated applications using data from motion sensors, as well as lead to more robust biometric systems based on gait patterns. Full article
(This article belongs to the Special Issue Ubiquitous Sensor Networks)

11 pages, 379 KiB  
Article
Big-But-Biased Data Analytics for Air Quality
by Laura Borrajo and Ricardo Cao
Electronics 2020, 9(9), 1551; https://doi.org/10.3390/electronics9091551 - 22 Sep 2020
Cited by 4 | Viewed by 2478
Abstract
Air pollution is one of the big concerns for smart cities. The problem of applying big data analytics under sampling bias in the context of urban air quality is studied in this paper. A nonparametric estimator that incorporates kernel density estimation is used. Since the biasing weight function is unknown, a small-sized simple random sample of the real population is assumed to be additionally observed. The general parameter considered is the mean of a transformation of the random variable of interest. A new bootstrap algorithm is used to approximate the mean squared error of the new estimator; its minimization leads to an automatic bandwidth selector. The method is applied to a real data set concerning the levels of different pollutants in the urban air of the city of A Coruña (Galicia, NW Spain). Estimations of the mean and the cumulative distribution function of the levels of ozone and nitrogen dioxide when the temperature is greater than or equal to 30 °C, based on 15 years of biased data, are obtained. Full article
(This article belongs to the Special Issue Big Data Analytics for Smart Cities)
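The core idea above, correcting a large biased sample with a small simple random sample, can be sketched with a generic density-ratio reweighting scheme. This is not the paper's exact estimator; the plug-in Gaussian kernel, fixed bandwidth, and synthetic data below are assumptions chosen purely for illustration:

```python
import math

def gauss_kde(data, h):
    # plug-in Gaussian kernel density estimate with bandwidth h
    n = len(data)
    c = 1.0 / (n * h * math.sqrt(2 * math.pi))
    return lambda x: c * sum(math.exp(-0.5 * ((x - d) / h) ** 2) for d in data)

def debiased_mean(big_biased, small_srs, h=0.05):
    # reweight the large biased sample by the estimated density ratio
    # f_srs(x) / f_biased(x), so over-represented regions count less
    f_b = gauss_kde(big_biased, h)
    f_s = gauss_kde(small_srs, h)
    w = [f_s(x) / f_b(x) for x in big_biased]
    return sum(wi * x for wi, x in zip(w, big_biased)) / sum(w)
```

On a synthetic population where large values are oversampled, the reweighted mean lands closer to the true population mean than the raw biased mean; the paper's contribution is choosing the bandwidth automatically via a bootstrap approximation of the mean squared error.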

19 pages, 23557 KiB  
Article
Development of a Multi-Purpose Autonomous Differential Drive Mobile Robot for Plant Phenotyping and Soil Sensing
by Jawad Iqbal, Rui Xu, Hunter Halloran and Changying Li
Electronics 2020, 9(9), 1550; https://doi.org/10.3390/electronics9091550 - 22 Sep 2020
Cited by 28 | Viewed by 10389
Abstract
To help address the growing global demand for food and fiber, selective breeding programs aim to cultivate crops with higher yields and more resistance to stress. Measuring the phenotypic traits needed for breeding programs is usually done manually and is labor-intensive, subjective, and lacks adequate temporal resolution. This paper presents a Multipurpose Autonomous Robot of Intelligent Agriculture (MARIA), an open source differential drive robot that is able to navigate autonomously indoors and outdoors while conducting plant morphological trait phenotyping and soil sensing. For the design of the rover, a drive system was developed using the Robot Operating System (ROS), which allows for autonomous navigation using Global Navigation Satellite Systems (GNSS). For phenotyping, the robot was fitted with an actuated LiDAR unit and a depth camera that can estimate morphological traits of plants such as volume and height. A three degree-of-freedom manipulator mounted on the mobile platform was designed using Dynamixel servos that can perform soil sensing and sampling using off-the-shelf and 3D printed components. MARIA was able to navigate both indoors and outdoors with an RMSE of 0.0156 m and 0.2692 m, respectively. Additionally, the onboard actuated LiDAR sensor was able to estimate plant volume and height with an average error of 1.76% and 3.2%, respectively. The manipulator's performance tests on soil sensing were also satisfactory. This paper presents a design for a differential drive mobile robot built from off-the-shelf components, which makes it replicable and available for implementation by other researchers. The validation of this system suggests that it may be a valuable solution to address the phenotyping bottleneck by providing a system capable of navigating through crop rows or a greenhouse while conducting phenotyping and soil measurements. Full article
(This article belongs to the Special Issue Modeling, Control, and Applications of Field Robotics)

14 pages, 2446 KiB  
Article
Improving the Performance of RLizard on Memory-Constraint IoT Devices with 8-Bit ATmega MCU
by Jin-Kwan Jeon, In-Won Hwang, Hyun-Jun Lee and Younho Lee
Electronics 2020, 9(9), 1549; https://doi.org/10.3390/electronics9091549 - 22 Sep 2020
Viewed by 2256
Abstract
We propose an improved RLizard implementation method that enables the RLizard key encapsulation mechanism (KEM) to run in a resource-constrained Internet of Things (IoT) environment with an 8-bit microcontroller unit (MCU) and 8–16 KB of SRAM. Existing research has shown that the method can function in a relatively high-end IoT environment, but the existing implementation cannot be applied to our environment because of the insufficient SRAM space. We improve the implementation of the RLizard KEM by utilizing the electrically erasable programmable read-only memory (EEPROM) and flash memory possessed by all 8-bit ATmega MCUs. In addition, to offset the increase in execution time associated with their use, we improve the multiplication between polynomials by exploiting the special property of the second multiplicand in each algorithm of the RLizard KEM, thus reducing the required MCU clock cycle consumption. The results show that, compared to the existing code submitted to the National Institute of Standards and Technology (NIST) PQC standardization competition, the required MCU clock cycles are reduced by an average of 52%, and the memory used is reduced by approximately 77%. In this way, we verified that the RLizard KEM works well in low-end IoT environments. Full article
(This article belongs to the Special Issue Data Security)
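The "special property of the second multiplicand" exploited above is, in lattice KEMs of this family, typically its sparseness: one operand has only a few nonzero (±1) coefficients, so multiplication reduces to a handful of signed shifted additions. The sketch below assumes a negacyclic ring Z_q[x]/(x^n + 1) and an index-list representation; these are illustrative assumptions about the general technique, not the RLizard reference code:

```python
def sparse_mul(a, plus, minus, n, q):
    # multiply a dense polynomial a (list of n coefficients) by a sparse
    # ternary polynomial given as index lists of its +1 and -1 coefficients,
    # in Z_q[x]/(x^n + 1); wrapping past degree n flips the sign (negacyclic)
    res = [0] * n
    for idx, sign in [(i, 1) for i in plus] + [(i, -1) for i in minus]:
        for j, aj in enumerate(a):
            k = idx + j
            if k < n:
                res[k] = (res[k] + sign * aj) % q
            else:
                res[k - n] = (res[k - n] - sign * aj) % q
    return res
```

Because only the nonzero indices are visited, the work scales with the weight of the sparse operand rather than n^2, which is the kind of saving that translates into the reported clock-cycle reduction on an 8-bit MCU.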

16 pages, 402 KiB  
Article
Designing a CHAM Block Cipher on Low-End Microcontrollers for Internet of Things
by Hyeokdong Kwon, SangWoo An, YoungBeom Kim, Hyunji Kim, Seung Ju Choi, Kyoungbae Jang, Jaehoon Park, Hyunjun Kim, Seog Chung Seo and Hwajeong Seo
Electronics 2020, 9(9), 1548; https://doi.org/10.3390/electronics9091548 - 22 Sep 2020
Cited by 9 | Viewed by 2910
Abstract
As the technology of the Internet of Things (IoT) evolves, abundant data is generated by sensor nodes and exchanged between them. For this reason, efficient encryption is required to keep the data secret. Since low-end IoT devices have limited computation power, it is difficult to operate expensive ciphers on them. Lightweight block ciphers reduce computation overheads, which makes them suitable for low-end IoT platforms. In this paper, we implemented the optimized CHAM block cipher in the counter mode of operation on 8-bit AVR microcontrollers (i.e., representative sensor nodes). Four new techniques are applied. First, the execution time is drastically reduced by skipping eight rounds through pre-calculation and look-up table access. Second, encryption under a variable-key scenario is optimized with on-the-fly table calculation. Third, parallel encryption allows multiple blocks to be computed online in the CHAM-64/128 case. Fourth, state-of-the-art engineering techniques are fully utilized at the instruction level and register level. With these optimization methods, the proposed optimized CHAM implementations for the counter mode of operation outperformed the state-of-the-art implementations by 12.8%, 8.9%, and 9.6% for CHAM-64/128, CHAM-128/128, and CHAM-128/256, respectively. Full article
(This article belongs to the Special Issue Recent Advances in Cryptography and Network Security)

16 pages, 3578 KiB  
Article
Defect Detection in Printed Circuit Boards Using You-Only-Look-Once Convolutional Neural Networks
by Venkat Anil Adibhatla, Huan-Chuang Chih, Chi-Chang Hsu, Joseph Cheng, Maysam F. Abbod and Jiann-Shing Shieh
Electronics 2020, 9(9), 1547; https://doi.org/10.3390/electronics9091547 - 22 Sep 2020
Cited by 90 | Viewed by 10596
Abstract
In this study, a deep learning algorithm based on the you-only-look-once (YOLO) approach is proposed for the quality inspection of printed circuit boards (PCBs). The high accuracy and efficiency of deep learning algorithms have resulted in their increased adoption in every field. Similarly, accurate detection of defects in PCBs by using deep learning algorithms, such as convolutional neural networks (CNNs), has garnered considerable attention. In the proposed method, highly skilled quality inspection engineers first use an interface to record and label defective PCBs. The data are then used to train a YOLO/CNN model to detect defects in PCBs. In this study, 11,000 images and a network of 24 convolutional layers and 2 fully connected layers were used. The proposed model achieved a defect detection accuracy of 98.79% in PCBs with a batch size of 32. Full article
(This article belongs to the Special Issue Convolutional Neural Networks and Vision Applications)
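Detection quality in YOLO-style pipelines such as the one above is conventionally scored by intersection-over-union (IoU) between predicted and labeled defect boxes; a minimal helper follows. The corner-coordinate box format is a common convention assumed here, not a detail taken from the paper:

```python
def iou(a, b):
    # boxes given as (x1, y1, x2, y2); returns intersection-over-union
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0
```

A prediction is typically counted as a correct detection when its IoU with a labeled defect exceeds a fixed threshold (0.5 is a common choice).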

24 pages, 11810 KiB  
Article
State of Charge Estimation in Lithium-Ion Batteries: A Neural Network Optimization Approach
by M. S. Hossain Lipu, M. A. Hannan, Aini Hussain, Afida Ayob, Mohamad H. M. Saad and Kashem M. Muttaqi
Electronics 2020, 9(9), 1546; https://doi.org/10.3390/electronics9091546 - 22 Sep 2020
Cited by 48 | Viewed by 4342
Abstract
The development of an accurate and robust state-of-charge (SOC) estimation is crucial for battery lifetime, efficiency, charge control, and the safe driving of electric vehicles (EVs). This paper proposes an enhanced data-driven method based on a time-delay neural network (TDNN) algorithm for SOC estimation in lithium-ion batteries. Nevertheless, SOC accuracy depends on a suitable selection of the TDNN algorithm's hyperparameters. Hence, the TDNN algorithm is optimized by the improved firefly algorithm (iFA) to determine the optimal number of input time delays (UTD) and hidden neurons (HNs). This work investigates the performance of lithium nickel manganese cobalt oxide (LiNiMnCoO2) and lithium nickel cobalt aluminum oxide (LiNiCoAlO2) batteries for SOC estimation under two experimental test conditions: the static discharge test (SDT) and the hybrid pulse power characterization (HPPC) test. Additionally, the accuracy of the proposed method is evaluated under different EV drive cycles and temperature settings. The results show that the iFA-based TDNN achieves precise SOC estimation with a root mean square error (RMSE) below 1%. Furthermore, the effectiveness and robustness of the proposed approach are validated against uncertainties including noise impacts and aging influences. Full article
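A time-delay neural network, as used above, is an ordinary feedforward network whose input at each step is a sliding window of lagged samples; the window construction is the part worth sketching. The delay count below is a placeholder, not the paper's iFA-optimized UTD value:

```python
def tdnn_inputs(series, delays):
    # build TDNN input vectors [x_t, x_{t-1}, ..., x_{t-delays}]
    # for every time step t that has a full history available
    return [series[t - delays:t + 1][::-1] for t in range(delays, len(series))]
```

Each vector pairs with the SOC label at time t; the iFA search in the paper then tunes how many delays and hidden neurons the network should use.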

16 pages, 4297 KiB  
Article
Comparing Video Activity Classifiers within a Novel Framework
by Chiman Kwan, Bence Budavari and Bulent Ayhan
Electronics 2020, 9(9), 1545; https://doi.org/10.3390/electronics9091545 - 21 Sep 2020
Cited by 2 | Viewed by 1725
Abstract
Video activity classification has many applications. It is challenging because of the diverse characteristics of different events. In this paper, we examined different approaches to event classification within a general framework for video activity detection and classification. In our experiments, we focused on event classification, for which we explored a deep learning-based approach, a rule-based approach, and a hybrid combination of the two. Experimental results using the well-known Video Image Retrieval and Analysis Tool (VIRAT) database showed that the proposed classification approaches within the framework are promising, and more research is needed in this area. Full article
(This article belongs to the Section Computer Science & Engineering)

15 pages, 4651 KiB  
Article
Modeling and Analysis of the Fractional-Order Flyback Converter in Continuous Conduction Mode by Caputo Fractional Calculus
by Chen Yang, Fan Xie, Yanfeng Chen, Wenxun Xiao and Bo Zhang
Electronics 2020, 9(9), 1544; https://doi.org/10.3390/electronics9091544 - 21 Sep 2020
Cited by 13 | Viewed by 2525
Abstract
In order to obtain more realistic characteristics of the converter, a fractional-order inductor and capacitor are used in the modeling of power electronic converters. However, few studies have focused on power electronic converters with a fractional-order mutual inductance. This paper introduces a fractional-order flyback converter with a fractional-order mutual inductance and a fractional-order capacitor. The equivalent circuit model of the fractional-order mutual inductance is derived. Then, the state-space average model of the fractional-order flyback converter in continuous conduction mode (CCM) is established. Moreover, direct current (DC) analysis and alternating current (AC) analysis are performed under the Caputo fractional definition. Theoretical analysis shows that the orders have an important influence on the ripple, the CCM operating condition and the transfer functions. Finally, the results of circuit simulation and numerical calculation are compared to verify the correctness of the theoretical analysis and the validity of the model. The simulation results show that the fractional-order flyback converter exhibits smaller overshoot, shorter settling time and higher design freedom compared with the integer-order flyback converter. Full article
(This article belongs to the Special Issue Fractional-Order Circuits & Systems Design and Applications)
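For readers unfamiliar with fractional operators: for functions with zero initial conditions, the Caputo derivative used above coincides with the Grünwald–Letnikov limit, which yields a direct numerical scheme via recursive binomial weights. The sketch below is a textbook approximation checked against a known closed form, not the paper's circuit model:

```python
import math

def gl_frac_deriv(f, alpha, t, h):
    # Grünwald-Letnikov approximation of the order-alpha derivative of f at t:
    # D^a f(t) ~ h^(-a) * sum_k w_k f(t - k h), with w_k = (-1)^k C(alpha, k)
    n = int(t / h)
    w, acc = 1.0, f(t)
    for k in range(1, n + 1):
        w *= 1.0 - (alpha + 1.0) / k   # recursive binomial weight update
        acc += w * f(t - k * h)
    return acc / h ** alpha
```

As a check, the half-order (alpha = 0.5) Caputo derivative of t^2 is 2 t^1.5 / Gamma(2.5), and the scheme reproduces it to first order in the step size h.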

30 pages, 13517 KiB  
Article
Backstepping Based Super-Twisting Sliding Mode MPPT Control with Differential Flatness Oriented Observer Design for Photovoltaic System
by Rashid Khan, Laiq Khan, Shafaat Ullah, Irfan Sami and Jong-Suk Ro
Electronics 2020, 9(9), 1543; https://doi.org/10.3390/electronics9091543 - 21 Sep 2020
Cited by 21 | Viewed by 3258
Abstract
The formulation of a maximum power point tracking (MPPT) control strategy plays a vital role in enhancing the inherent low conversion efficiency of a photovoltaic (PV) module. Keeping in view the nonlinear electrical characteristics of the PV module as well as the power electronic interface, in this paper, a hybrid nonlinear sensorless observer based robust backstepping super-twisting sliding mode control (BSTSMC) MPPT strategy is formulated to optimize the electric power extraction from a standalone PV array, connected to a resistive load through a non-inverting DC–DC buck-boost power converter. The reference peak power voltage is generated via the Gaussian process regression (GPR) based probabilistic machine learning approach that is adequately tracked by the proposed MPPT scheme. A generalized super-twisting algorithm (GSTA) based differential flatness approach (DFA) is used to retrieve all the missing system states. The Lyapunov stability theory is used for guaranteeing the stability of the proposed closed-loop MPPT technique. The Matlab/Simulink platform is used for simulation, testing and performance validation of the proposed MPPT strategy under different weather conditions. Its MPPT performance is further compared with the recently proposed benchmark backstepping based MPPT control strategy and the conventional MPPT strategies, namely, sliding mode control (SMC), proportional integral derivative (PID) control and the perturb-and-observe (P&O) algorithm. The proposed technique is found to have a superior tracking performance in terms of offering a fast dynamic response, finite-time convergence, minute chattering, higher tracking accuracy and having more robustness against plant parametric uncertainties, load disturbances and certain time-varying sinusoidal faults occurring in the system. Full article
(This article belongs to the Section Power Electronics)
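Of the MPPT baselines named above, perturb-and-observe (P&O) is simple enough to sketch: keep perturbing the voltage reference in the direction that last increased the extracted power. The step size and power curve below are illustrative placeholders; the paper's actual contribution, the BSTSMC scheme, is far more involved:

```python
def p_and_o_step(v, p, v_prev, p_prev, step=0.05):
    # classic hill climbing: continue in the same direction if the last
    # perturbation increased power, otherwise reverse direction
    return v + step if (p - p_prev) * (v - v_prev) >= 0 else v - step

def track(power, v0=2.0, step=0.05, iters=200):
    # iterate P&O on a given power-vs-voltage curve
    v_prev, v = v0, v0 + step
    p_prev = power(v_prev)
    for _ in range(iters):
        p = power(v)
        v_prev, p_prev, v = v, p, p_and_o_step(v, p, v_prev, p_prev, step)
    return v
```

On a concave power curve the algorithm climbs to the maximum power point and then oscillates around it with an amplitude set by the step size, which is precisely the chattering that the paper's sliding-mode approach aims to reduce.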

15 pages, 6329 KiB  
Article
Predictive Torque Control Based on Discrete Space Vector Modulation of PMSM without Flux Error-Sign and Voltage-Vector Lookup Table
by Ibrahim Mohd Alsofyani and Kyo-Beum Lee
Electronics 2020, 9(9), 1542; https://doi.org/10.3390/electronics9091542 - 21 Sep 2020
Cited by 10 | Viewed by 2496
Abstract
The conventional finite set–predictive torque control of permanent magnet synchronous motors (PMSMs) suffers from large flux and torque ripples, as well as high current harmonic distortions. Introducing the discrete space vector modulation (DSVM) into the predictive torque control (PTC-DSVM) can improve its steady-state performance; however, the control complexity is further increased owing to the large voltage–vector lookup table that increases the burden of memory. A simplified PTC-DSVM with 73 synthesized voltage vectors (VVs) is proposed herein, for further improving the steady-state performance of the PMSM drives with a significantly lower complexity and without requiring a VV lookup table. The proposed scheme for reducing the computation burden is designed to select an optimal zone of space vector diagram (SVD) in the utilized DSVM based on the torque demand. Hence, only 10 out of 73 admissible VVs will be initiated online upon the optimal SVD zone selection. Additionally, with the proposed algorithm, no flux error is required to control the flux demand. The proposed PTC-DSVM exhibits high performance features, such as low complexity with less memory utilization, reduced torque and flux ripples, and less redundant VVs in the prediction process. The simulation and experimental results for the 11 kW PMSM drive are presented to prove the effectiveness of the proposed control strategy. Full article
(This article belongs to the Special Issue High Power Electric Traction Systems)

12 pages, 260 KiB  
Article
A Parallel Algorithm for Matheuristics: A Comparison of Optimization Solvers
by Martín González, Jose J. López-Espín and Juan Aparicio
Electronics 2020, 9(9), 1541; https://doi.org/10.3390/electronics9091541 - 21 Sep 2020
Cited by 3 | Viewed by 2814
Abstract
Metaheuristic and exact methods are among the most common tools used to solve Mixed-Integer Optimization Problems (MIPs). Most of these problems are NP-hard, making it intractable to obtain optimal solutions in a reasonable time when the size of the problem is huge. In this paper, a hybrid parallel optimization algorithm for matheuristics is studied. In this algorithm, exact and metaheuristic methods work together to solve a Mixed Integer Linear Programming (MILP) problem, which is divided into two different subproblems, one of which is linear (and easier to solve by exact methods) and the other discrete (and solved using metaheuristic methods). Even so, solving this problem has a high computational cost. The proposed algorithm follows an efficient decomposition based on the nature of the decision variables (continuous versus discrete). Because of the high cost of the algorithm, parallelism techniques have been incorporated at different levels to reduce the computing cost. The matheuristic has been optimized both at the level of the problem division and internally. This configuration offers the opportunity to improve both the computational time and the fitness function. The paper also focuses on the performance of different optimization software packages working in parallel. In particular, a comparison of two well-known optimization software packages (CPLEX and GUROBI) is performed when they execute several simultaneous instances, solving various problems at the same time. Thus, this paper proposes and studies a two-level parallel algorithm based on message-passing (MPI) and shared-memory (OpenMP) schemes where the two subproblems are considered and the linear problem is solved using the optimization software packages CPLEX and GUROBI. Experiments have also been carried out to ascertain the performance of the application using different programming paradigms (shared memory and distributed memory). Full article
(This article belongs to the Special Issue High-Performance Computer Architectures and Applications)
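The continuous/discrete split described above can be illustrated with a toy sketch (the facility-covering instance and all names here are invented for illustration, not taken from the paper): a bit-flip local search plays the metaheuristic role on the binary variables, a closed-form solve stands in for the exact LP solver (CPLEX/GUROBI), and neighbour evaluations run in parallel as a stand-in for the paper's MPI/OpenMP levels.

```python
from concurrent.futures import ThreadPoolExecutor

# Toy MILP-like instance: open facilities (binary y) so their capacity
# covers a demand; the continuous variable x (production) has a
# closed-form optimum standing in for an exact LP solve.
DEMAND = 10
CAPS = [6, 5, 4]     # capacity contributed by each facility
COSTS = [3, 2, 4]    # fixed cost of opening each facility

def solve_continuous(y):
    """Exact solve of the continuous subproblem for a fixed discrete y."""
    cap = sum(c for c, yi in zip(CAPS, y) if yi)
    x = min(DEMAND, cap)                 # closed-form optimum of x
    shortfall = DEMAND - x
    return shortfall ** 2 + sum(c for c, yi in zip(COSTS, y) if yi)

def local_search(y0):
    """Discrete level: bit-flip local search; neighbours are evaluated in parallel."""
    best_y, best_f = y0, solve_continuous(y0)
    improved = True
    with ThreadPoolExecutor() as pool:
        while improved:
            improved = False
            neigh = [tuple(1 - b if i == j else b for j, b in enumerate(best_y))
                     for i in range(len(best_y))]
            for y, f in zip(neigh, pool.map(solve_continuous, neigh)):
                if f < best_f:
                    best_y, best_f, improved = y, f, True
    return best_y, best_f
```

On this instance, `local_search((0, 0, 0))` opens the two facilities that cover the demand at minimum cost.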
13 pages, 5546 KiB  
Article
Monolithic Integrated High Frequency GaN DC-DC Buck Converters with High Power Density Controlled by Current Mode Logic Level Signal
by Longkun Lai, Ronghua Zhang, Kui Cheng, Zhiying Xia, Chun Wei, Ke Wei, Weijun Luo and Xinyu Liu
Electronics 2020, 9(9), 1540; https://doi.org/10.3390/electronics9091540 - 21 Sep 2020
Cited by 7 | Viewed by 2830
Abstract
Integration is a key way to improve the switching frequency and power density of a DC-DC converter. A monolithic integrated GaN-based DC-DC buck converter is realized using a gate driver and a half-bridge power stage. The gate driver is composed of three stages (an amplitude amplifier stage, a level-shifting stage and a resistive-load amplifier stage) that amplify and modulate the driver control signal, a CML (current mode logic) level signal whose swing is from 1.1 to 1.8 V, meaning that no additional buffer or preamplifier is needed for the control signal. The gate driver provides sufficient driving capability for the power stage and improves the power density efficiently. The proposed GaN-based DC-DC buck converter is implemented in a 0.25 μm depletion-mode GaN-on-SiC process with a chip area of 1.7 mm × 1.3 mm, is capable of operating at switching frequencies up to 200 MHz, and achieves a power density up to 1 W/mm2 at 15 V output voltage. To the authors’ knowledge, this is the highest power density reported for a GaN-based DC-DC converter in the hundreds-of-megahertz range. Full article
(This article belongs to the Section Power Electronics)
20 pages, 799 KiB  
Article
Heterogeneous Cooperative Bare-Bones Particle Swarm Optimization with Jump for High-Dimensional Problems
by Joonwoo Lee and Won Kim
Electronics 2020, 9(9), 1539; https://doi.org/10.3390/electronics9091539 - 21 Sep 2020
Viewed by 1873
Abstract
This paper proposes a novel Bare-Bones Particle Swarm Optimization (BBPSO) algorithm for solving high-dimensional problems. BBPSO is a variant of Particle Swarm Optimization (PSO) based on a Gaussian distribution. The BBPSO algorithm does not require the selection of controllable parameters for PSO and is a simple but powerful optimization method. This algorithm, however, is vulnerable to high-dimensional problems, i.e., it easily becomes stuck at local optima and is subject to the “two steps forward, one step backward” phenomenon. This study improves its performance on high-dimensional problems by combining heterogeneous cooperation, based on the exchange of information between particles, to overcome the “two steps forward, one step backward” phenomenon with a jumping strategy to avoid local optima. The CEC 2010 Special Session on Large-Scale Global Optimization (LSGO) identified 20 benchmark problems that provide convenience and flexibility for comparing optimization algorithms specifically designed for LSGO. Simulations are performed on these benchmark problems to verify the performance of the proposed optimizer by comparing its results with those of other PSO variants. Full article
(This article belongs to the Section Artificial Intelligence)
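For reference, canonical BBPSO with a simple uniform "jump" move might be sketched as follows; this is not the paper's heterogeneous cooperative variant, and all parameters here are illustrative.

```python
import random

def bbpso(f, dim, n_particles=20, iters=200, lo=-5.0, hi=5.0,
          jump_prob=0.01, seed=42):
    """Canonical Bare-Bones PSO with a uniform-restart 'jump'.

    Each coordinate is resampled from a Gaussian centred midway between the
    particle's personal best and the global best, with std equal to their
    distance; with probability jump_prob a coordinate instead jumps to a
    uniform random value, to escape local optima.
    """
    rng = random.Random(seed)
    pbest = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    pval = [f(p) for p in pbest]
    g = min(range(n_particles), key=lambda i: pval[i])
    gbest, gval = pbest[g][:], pval[g]
    for _ in range(iters):
        for i in range(n_particles):
            x = []
            for d in range(dim):
                if rng.random() < jump_prob:
                    x.append(rng.uniform(lo, hi))          # jump move
                else:
                    mu = 0.5 * (pbest[i][d] + gbest[d])
                    sigma = abs(pbest[i][d] - gbest[d])
                    x.append(rng.gauss(mu, sigma))
            fx = f(x)
            if fx < pval[i]:
                pbest[i], pval[i] = x, fx
                if fx < gval:
                    gbest, gval = x[:], fx
    return gbest, gval

def sphere(x):
    """Standard sphere test function: global minimum 0 at the origin."""
    return sum(v * v for v in x)
```

On the 5-dimensional sphere function, this sketch converges close to the origin within a few hundred iterations.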
18 pages, 1002 KiB  
Article
5G Cellular Networks: Coverage Analysis in the Presence of Inter-Cell Interference and Intentional Jammers
by Muhammad Qasim, Muhammad Sajid Haroon, Muhammad Imran, Fazal Muhammad and Sunghwan Kim
Electronics 2020, 9(9), 1538; https://doi.org/10.3390/electronics9091538 - 20 Sep 2020
Cited by 8 | Viewed by 2650
Abstract
Intentional jammers (IJs) can be used by attackers to launch distributed denial-of-service attacks in 5G cellular networks. These adversaries are assumed to have adequate information about network specifications, such as duration, transmit power, and positions. With these assumptions, the IJs gain the ability to disrupt the legitimate communication of the network. Heterogeneous cellular networks (HetNets) can be considered a vital enabler of 5G cellular networks. Small base stations (SBSs) are deployed inside the macro base station (MBS) region to improve spectral efficiency and capacity. Under the orthogonal frequency division multiplexing assumption, HetNets’ performance is mainly limited by inter-cell interference (ICI). Additionally, there is IJ interference (IJs-I), which significantly degrades network coverage depending on the IJs’ transmit power levels and their proximity to the target. The proposed work explores the uplink (UL) coverage performance of HetNets in the presence of both IJs-I and ICI. Moreover, to reduce the effects of ICI and IJs-I, reverse frequency allocation (RFA), a proactive interference-abatement scheme, is employed. In RFA, different sub-bands of the available spectrum are used by the MBS and SBSs in alternate regions. The proposed setup is evaluated both analytically and by simulation. The results demonstrate considerable UL coverage improvement through the effective mitigation of IJs-I and ICI. Full article
(This article belongs to the Section Networks)
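The kind of coverage evaluation described can be sketched with a generic Monte Carlo SINR simulation (this is not the paper's stochastic-geometry model; the path-loss exponent, distances, interferer counts, and fading assumptions below are all placeholders). Reusing the same channel draws for both cases guarantees that adding jammer power can only lower per-sample SINR.

```python
import random

def coverage(theta_db=0.0, n_trials=20000, n_ici=4, n_jam=2,
             noise=1e-9, alpha=4.0, seed=1):
    """Monte Carlo UL coverage P(SINR > threshold), with and without jammers.

    The serving link is at unit distance with Rayleigh fading; interfering
    cells and jammers are dropped at uniform random distances in [2, 10].
    The same draws are reused for both cases, so the jammed coverage is
    never higher than the jammer-free coverage.
    """
    rng = random.Random(seed)
    theta = 10 ** (theta_db / 10)
    hit_no_jam = hit_jam = 0
    for _ in range(n_trials):
        s = rng.expovariate(1.0)                       # Rayleigh-faded signal power
        ici = sum(rng.expovariate(1.0) * rng.uniform(2, 10) ** (-alpha)
                  for _ in range(n_ici))
        jam = sum(rng.expovariate(1.0) * rng.uniform(2, 10) ** (-alpha)
                  for _ in range(n_jam))
        if s / (noise + ici) > theta:
            hit_no_jam += 1
        if s / (noise + ici + jam) > theta:
            hit_jam += 1
    return hit_no_jam / n_trials, hit_jam / n_trials
```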
18 pages, 3046 KiB  
Article
Takagi–Sugeno Observer Design for Remaining Useful Life Estimation of Li-Ion Battery System Under Faults
by Norbert Kukurowski, Marcin Pazera and Marcin Witczak
Electronics 2020, 9(9), 1537; https://doi.org/10.3390/electronics9091537 - 20 Sep 2020
Cited by 4 | Viewed by 2084
Abstract
Among the existing estimation schemes for a battery’s state of charge, most assume that faults will never occur in the system. Nevertheless, faults may have a crucial impact on the accuracy of state-of-charge estimation. This paper proposes a novel observer design to estimate the state of charge and the remaining useful life of a Li-ion battery system under voltage and current measurement faults. The approach starts by converting the battery system into the descriptor Takagi–Sugeno form, where the state includes the original states along with the voltage and current measurement faults. Moreover, external disturbances are bounded by an ellipsoid based on the so-called Quadratic Boundedness approach, which ensures the system’s stability. A second-order Resistor-Capacitor equivalent circuit model is considered to verify the performance and correctness of the proposed observer. Subsequently, a real battery model is designed with experimental data of the Li-ion 18650 battery taken from the NASA benchmark. Another experiment deals with an automated guided vehicle powered by a battery whose remaining useful life is estimated. Finally, the results are compared with another estimation scheme based on the same benchmark. Full article
(This article belongs to the Section Systems & Control Engineering)
20 pages, 12781 KiB  
Article
FASSD: A Feature Fusion and Spatial Attention-Based Single Shot Detector for Small Object Detection
by Deng Jiang, Bei Sun, Shaojing Su, Zhen Zuo, Peng Wu and Xiaopeng Tan
Electronics 2020, 9(9), 1536; https://doi.org/10.3390/electronics9091536 - 19 Sep 2020
Cited by 13 | Viewed by 3755
Abstract
Deep learning methods have significantly improved object detection performance, but small object detection remains an extremely difficult and challenging task in computer vision. We propose a feature fusion and spatial attention-based single shot detector (FASSD) for small object detection. We fuse high-level semantic information into shallow layers to generate discriminative feature representations for small objects. To adaptively enhance the expression of small object areas and suppress the feature response of background regions, the spatial attention block learns a self-attention mask to enhance the original feature maps. We also establish a small object dataset (LAKE-BOAT) of a scene with a boat on a lake and tested our algorithm to evaluate its performance. The results show that our FASSD achieves 79.3% mAP (mean average precision) on the PASCAL VOC2007 test with a 300 × 300 input, which outperforms the original single shot multibox detector (SSD) by 1.6 points, as well as most improved algorithms based on SSD. The corresponding detection speed was 45.3 FPS (frames per second) on the VOC2007 test using a single NVIDIA TITAN RTX GPU. The test results of a simplified FASSD on the LAKE-BOAT dataset indicate that our model achieves a 3.5% mAP improvement over the baseline network while maintaining real-time detection speed (64.4 FPS). Full article
(This article belongs to the Special Issue Deep Learning Based Object Detection)
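A minimal sketch of a spatial-attention mask in the spirit described: the mask at each location is a sigmoid of the channel-wise mean and max, and it multiplies the original feature map. The weights `w_avg`, `w_max`, and `b` here are fixed placeholders for what the paper's block learns.

```python
import math

def spatial_attention(fmap, w_avg=1.0, w_max=1.0, b=0.0):
    """Apply a spatial-attention mask to a C x H x W feature map (nested lists).

    mask[h][w] = sigmoid(w_avg * channel_mean + w_max * channel_max + b);
    the output is the element-wise product of the features and the mask.
    """
    C, H, W = len(fmap), len(fmap[0]), len(fmap[0][0])
    sig = lambda z: 1.0 / (1.0 + math.exp(-z))
    mask = [[sig(w_avg * sum(fmap[c][h][w] for c in range(C)) / C
                 + w_max * max(fmap[c][h][w] for c in range(C)) + b)
             for w in range(W)] for h in range(H)]
    out = [[[fmap[c][h][w] * mask[h][w] for w in range(W)]
            for h in range(H)] for c in range(C)]
    return out, mask
```

Locations with stronger activations receive a mask value closer to 1, so they are preserved while weak (background-like) responses are suppressed.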
23 pages, 6515 KiB  
Article
Open-Circuit Fault Tolerance Method for Three-Level Hybrid Active Neutral Point Clamped Converters
by Laith M. Halabi, Ibrahim Mohd Alsofyani and Kyo-Beum Lee
Electronics 2020, 9(9), 1535; https://doi.org/10.3390/electronics9091535 - 19 Sep 2020
Cited by 10 | Viewed by 2286
Abstract
Three-level converters are among the most important technologies used in high-power applications. Among these, active neutral point clamped (ANPC) converters are mainly used in industrial applications. Meanwhile, recent developments have reduced losses and increased efficiency by using a hybrid combination of Si-IGBT and SiC-MOSFET switches to obtain hybrid ANPC (HANPC) converters. Open-circuit failure is a common and serious problem that affects operational performance. In this paper, an effective fault-tolerant method is proposed for HANPC converters to safely restore normal operation and increase system reliability under fault conditions. Unlike earlier fault-tolerance methods developed for other topologies, which cannot be applied to the HANPC, the proposed strategy enables continuous operation under faulty conditions without any additional devices, by creating new voltage references, a voltage offset, and switching sequences for the faulty conditions. Consequently, no additional costs or changes to the inverter are required. A detailed analysis of the proposed strategy is presented, highlighting its effects on the voltages, currents, and the corresponding total harmonic distortion (THD). The simulation and experimental results demonstrate the capability and effectiveness of the proposed method in maintaining normal operation and eliminating the output distortion. Full article
(This article belongs to the Special Issue High Power Electric Traction Systems)
5 pages, 186 KiB  
Editorial
Industrial Applications of Power Electronics
by Eduardo M. G. Rodrigues, Radu Godina and Edris Pouresmaeil
Electronics 2020, 9(9), 1534; https://doi.org/10.3390/electronics9091534 - 19 Sep 2020
Cited by 12 | Viewed by 4872
Abstract
Electronic applications use a wide variety of materials, knowledge, and devices, which pave the road to creative design, development, and the creation of countless electronic circuits for incorporation into electronic products. Power electronics has therefore been fully introduced into industry, in applications such as power supplies, converters, inverters, battery chargers, temperature control, and variable-speed motors, through the study of the effects and adaptation of electronic power systems to industrial processes. Recently, the role of power electronics has gained special significance with regard to energy conservation and environmental control. The reality is that the demand for electrical energy grows in direct proportion to improvements in quality of life. Consequently, the design, development, and optimization of power electronics and controller devices are essential to face forthcoming challenges. In this Special Issue, 19 selected and peer-reviewed papers address a wide variety of themes, such as motor drives, AC-DC and DC-DC converters, electromagnetic compatibility, and multilevel converters. Full article
(This article belongs to the Special Issue Industrial Applications of Power Electronics)
18 pages, 997 KiB  
Article
DeepIDS: Deep Learning Approach for Intrusion Detection in Software Defined Networking
by Tuan Anh Tang, Lotfi Mhamdi, Des McLernon, Syed Ali Raza Zaidi, Mounir Ghogho and Fadi El Moussa
Electronics 2020, 9(9), 1533; https://doi.org/10.3390/electronics9091533 - 19 Sep 2020
Cited by 43 | Viewed by 5446
Abstract
Software Defined Networking (SDN) is developing as a new solution for the development and innovation of the Internet. SDN is expected to be the ideal future for the Internet, since it can provide a controllable, dynamic, and cost-effective network. The emergence of SDN provides a unique opportunity to achieve network security in a more efficient and flexible manner. However, SDN also has inherent structural vulnerabilities: the centralized controller, the control-data interface, and the control-application interface. These vulnerabilities can be exploited by intruders to conduct several types of attacks. In this paper, we propose a deep learning (DL) approach for a network intrusion detection system (DeepIDS) in the SDN architecture. Our models are trained and tested with the NSL-KDD dataset and achieve accuracies of 80.7% and 90% for a Fully Connected Deep Neural Network (DNN) and a Gated Recurrent Neural Network (GRU-RNN), respectively. Through experiments, we confirm that the DL approach has potential for flow-based anomaly detection in the SDN environment. We also evaluate the performance of our system in terms of throughput, latency, and resource utilization. Our test results show that DeepIDS does not affect the performance of the OpenFlow controller and is therefore a feasible approach. Full article
(This article belongs to the Special Issue Enabling Technologies for Internet of Things)
12 pages, 6734 KiB  
Article
Compact Dual-Band Rectenna Based on Dual-Mode Metal-Rimmed Antenna
by Ha Vu Ngoc Anh, Nguyen Minh Thien, Le Huy Trinh, Truong Nguyen Vu and Fabien Ferrero
Electronics 2020, 9(9), 1532; https://doi.org/10.3390/electronics9091532 - 18 Sep 2020
Cited by 3 | Viewed by 2757
Abstract
This paper proposes the design of a dual-band integrated rectenna. The rectenna has a compact size of 0.4 × 0.3 × 0.25 cm3 and operates in the 925 MHz and 2450 MHz bands. The rectenna consists of two main parts: the metal-rimmed dual-band antenna used for harvesting radio frequency (RF) signals from the environment, and the rectifier circuit that converts the received power to direct current (DC). Because of the dual-resonant structure of the antenna, the rectifier circuit can be optimized in terms of size and frequency bandwidth, while conversion efficiencies of 60% are obtained at RF input powers of −2.5 dBm and −1 dBm for the lower and higher bands, respectively. Measured results show that the metal-rimmed antenna exhibits a reflection coefficient below −10 dB in both desired frequency bands. Moreover, the antenna achieves total efficiencies of 47% and 89% at 925 MHz and 2450 MHz, respectively, which confirms that the proposed rectenna is well suited to most miniaturized wireless sensor networks and IoT systems. Full article
(This article belongs to the Special Issue Design and Measurement of Integrated Antenna)
21 pages, 15215 KiB  
Article
Multi-Sensor Image Fusion Using Optimized Support Vector Machine and Multiscale Weighted Principal Component Analysis
by Shanshan Huang, Yikun Yang, Xin Jin, Ya Zhang, Qian Jiang and Shaowen Yao
Electronics 2020, 9(9), 1531; https://doi.org/10.3390/electronics9091531 - 18 Sep 2020
Cited by 7 | Viewed by 2616
Abstract
Multi-sensor image fusion combines the complementary information of source images from multiple sensors. Recently, conventional image fusion schemes based on signal-processing techniques have been studied extensively, and machine learning-based techniques have been introduced into image fusion because of their prominent advantages. In this work, a new multi-sensor image fusion method based on the support vector machine and principal component analysis is proposed. First, the key features of the source images are extracted by combining the sliding-window technique with five effective evaluation indicators. Second, a trained support vector machine model extracts the focus and non-focus regions of the source images according to the extracted image features, and a fusion decision is thereby obtained for each source image. Then, a consistency verification operation absorbs isolated singular points in the decisions of the trained classifier. Finally, a novel method based on principal component analysis and a multi-scale sliding window is proposed to handle the disputed areas in the fusion decision pair. Experiments are performed to verify the performance of the new combined method. Full article
(This article belongs to the Special Issue Pattern Recognition and Applications)
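The sliding-window decision step can be illustrated with a toy stand-in, where local variance replaces the paper's five indicators and trained SVM as the focus measure: each pixel is taken from whichever source image is locally sharper.

```python
def window_variance(img, h, w, r=1):
    """Variance of pixel values in a (2r+1) x (2r+1) window, clipped at borders."""
    vals = [img[i][j]
            for i in range(max(0, h - r), min(len(img), h + r + 1))
            for j in range(max(0, w - r), min(len(img[0]), w + r + 1))]
    m = sum(vals) / len(vals)
    return sum((v - m) ** 2 for v in vals) / len(vals)

def fuse(img_a, img_b, r=1):
    """Per-pixel decision map: take the source with the higher local variance
    (a simple stand-in focus measure), then compose the fused image."""
    H, W = len(img_a), len(img_a[0])
    decision = [[window_variance(img_a, h, w, r) >= window_variance(img_b, h, w, r)
                 for w in range(W)] for h in range(H)]
    fused = [[img_a[h][w] if decision[h][w] else img_b[h][w]
              for w in range(W)] for h in range(H)]
    return fused, decision
```

With two toy images that are each sharp in a different half, the decision map correctly picks the textured (in-focus) half of each source.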
20 pages, 6219 KiB  
Article
Evaluating the Factors Affecting QoE of 360-Degree Videos and Cybersickness Levels Predictions in Virtual Reality
by Muhammad Shahid Anwar, Jing Wang, Sadique Ahmad, Asad Ullah, Wahab Khan and Zesong Fei
Electronics 2020, 9(9), 1530; https://doi.org/10.3390/electronics9091530 - 18 Sep 2020
Cited by 23 | Viewed by 3516
Abstract
360-degree Virtual Reality (VR) videos have already taken viewers’ attention by storm. Despite the immense attractiveness and hype, VR brings an unpleasant side effect called “cybersickness” that often creates significant discomfort for viewers. It is of great importance to evaluate the factors that induce cybersickness symptoms and their deterioration of the end user’s Quality-of-Experience (QoE) when viewing 360-degree videos in VR. This manuscript subjectively investigates high-priority factors that affect a user’s QoE in terms of perceptual quality, presence, and cybersickness. The content type (fast, medium, and slow), the effect of camera motion (fixed, horizontal, and vertical), and the number of moving targets (none, single, and multiple) in a video are factors that may affect the QoE. The effect of these factors on end-user QoE under various stalling events (none, single, and multiple) is evaluated in a subjective experiment. The results show a notable impact of these factors on end-user QoE. Finally, to address the viewing-safety concern in VR, we propose a neural network-based QoE prediction method that can predict the degree of cybersickness induced by 360-degree videos under various stalling events in VR. The prediction accuracy of the proposed method is then compared against well-known Machine Learning (ML) algorithms and existing QoE prediction models. The proposed method achieved a 90% prediction accuracy rate and performed well against existing models and other ML methods. Full article
(This article belongs to the Section Computer Science & Engineering)
19 pages, 5448 KiB  
Article
Efficient Approximate Adders for FPGA-Based Data-Paths
by Stefania Perri, Fanny Spagnolo, Fabio Frustaci and Pasquale Corsonello
Electronics 2020, 9(9), 1529; https://doi.org/10.3390/electronics9091529 - 18 Sep 2020
Cited by 14 | Viewed by 4215
Abstract
Approximate computing represents a powerful technique to reduce energy consumption and computational delay in error-resilient applications, such as multimedia processing, machine learning, and many others. In these contexts, designing efficient digital data-paths is a crucial concern. For this reason, the addition operation has received a great deal of attention. However, most of the approximate adders proposed in the literature are oriented to Application Specific Integrated Circuits (ASICs), and their deployment on different devices, such as Field Programmable Gate Arrays (FPGAs), appears to be unfeasible (or at least ineffective). This paper presents a novel approximate addition technique designed to efficiently exploit the configurable resources available within an FPGA device. The proposed approximation strategy sums the k least significant bits two-by-two by using 4-input Look-up-Tables (LUTs), each performing a precise 2-bit addition with a zeroed carry-in. In comparison with several FPGA-based approximate adders in the existing literature, the novel adder achieves markedly improved error characteristics without compromising either the power consumption or the delay. As an example, when implemented within the Artix-7 xc7a100tcsg324-3 chip, the 32-bit adder designed as proposed here with k = 8 performs as fast as its competitors and reduces the Mean Error Distance (MED) by up to 72% over the state-of-the-art approximate adders, with an energy penalty of just 8% in the worst scenario. The integration of the new approximate adder within a more complex application, such as the 2D digital image filtering, has shown even better results. In such a case, the MED is reduced by up to 97% with respect to the FPGA-based counterparts proposed in the literature. Full article
(This article belongs to the Special Issue Circuits and Systems for Approximate Computing)
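The described approximation is concrete enough to model in software. A sketch, assuming the pairwise 2-bit sums simply drop the carries between pairs (each pair modelling one set of 4-input LUTs with a zeroed carry-in), while the upper width − k bits are added exactly:

```python
def approx_add(a, b, k=8, width=32):
    """Approximate adder: the k LSBs are summed two bits at a time; each
    pair is an exact 2-bit addition with its carry-in forced to zero, so
    carries between pairs are dropped. The upper width-k bits add exactly.
    """
    assert k % 2 == 0 and 0 <= k <= width
    low = 0
    for i in range(0, k, 2):
        pa = (a >> i) & 0b11
        pb = (b >> i) & 0b11
        low |= ((pa + pb) & 0b11) << i   # 2-bit sum, carry-out discarded
    high = ((a >> k) + (b >> k)) & ((1 << (width - k)) - 1)
    return (high << k) | low
```

When no 2-bit pair overflows, the result is exact; the error appears only when a dropped inter-pair carry would have propagated upward.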
13 pages, 3560 KiB  
Article
ACSiamRPN: Adaptive Context Sampling for Visual Object Tracking
by Xiaofei Qin, Yipeng Zhang, Hang Chang, Hao Lu and Xuedian Zhang
Electronics 2020, 9(9), 1528; https://doi.org/10.3390/electronics9091528 - 18 Sep 2020
Cited by 3 | Viewed by 2140
Abstract
In visual object tracking fields, the Siamese network tracker, based on the region proposal network (SiamRPN), has achieved promising tracking effects, both in speed and accuracy. However, it does not consider the relationships and differences among the long-range context information of various objects. In this paper, we add a global context block (GC block), which is lightweight and can effectively model long-range dependency, to the Siamese network part of SiamRPN so that the object tracker can better understand the tracking scene. At the same time, we propose a novel convolution module, called a cropping-inside selective kernel block (CiSK block), based on selective kernel convolution (SK convolution, a module proposed in selective kernel networks) and use it in the region proposal network (RPN) part of SiamRPN, which can adaptively adjust the size of the receptive field for different types of objects. We make two improvements to SK convolution in the CiSK block. The first improvement is that in the fusion step of SK convolution, we use both global average pooling (GAP) and global maximum pooling (GMP) to enhance global information embedding. The second improvement is that after the selection step of SK convolution, we crop out the outermost pixels of features to reduce the impact of padding operations. The experimental results show that on the OTB100 benchmark, we achieved an accuracy of 0.857 and a success rate of 0.643. On the VOT2016 and VOT2019 benchmarks, we achieved expected average overlap (EAO) scores of 0.394 and 0.240, respectively. Full article
(This article belongs to the Special Issue Visual Object Tracking: Challenges and Applications)
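The GAP+GMP fusion idea can be sketched for two branches, simplified to one value per channel and with a scalar weight standing in for the learned fully connected layers (the real SK fusion produces per-channel attention; this reduction is only illustrative).

```python
import math

def sk_fuse(branch_a, branch_b, w=1.0):
    """Fuse two branch feature vectors with softmax attention computed from
    both global average pooling (GAP) and global max pooling (GMP)
    descriptors; w is a scalar placeholder for the learned layers.
    """
    def softmax2(u, v):
        eu, ev = math.exp(u), math.exp(v)
        return eu / (eu + ev), ev / (eu + ev)

    def descriptor(x):
        return w * (sum(x) / len(x) + max(x))   # GAP + GMP embedding

    wa, wb = softmax2(descriptor(branch_a), descriptor(branch_b))
    fused = [wa * a + wb * b for a, b in zip(branch_a, branch_b)]
    return fused, (wa, wb)
```

The attention weights sum to one, so the fused features are a convex combination of the two branches, with the stronger branch weighted more heavily.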
21 pages, 3490 KiB  
Article
A New Text Classification Model Based on Contrastive Word Embedding for Detecting Cybersecurity Intelligence in Twitter
by Han-Sub Shin, Hyuk-Yoon Kwon and Seung-Jin Ryu
Electronics 2020, 9(9), 1527; https://doi.org/10.3390/electronics9091527 - 18 Sep 2020
Cited by 22 | Viewed by 4804
Abstract
Detecting cybersecurity intelligence (CSI) on social media such as Twitter is crucial because it allows security experts to respond to cyber threats in advance. In this paper, we devise a new text classification model based on deep learning to classify CSI-positive and -negative tweets from a collection of tweets. For this, we propose a novel word embedding model, called contrastive word embedding, that makes it possible to maximize the difference between base embedding models. First, we define CSI-positive and -negative corpora, which are used for constructing embedding models. To compensate for the imbalance of the tweet data sets, we additionally employ background knowledge for each tweet corpus: (1) the CVE data set for the CSI-positive corpus and (2) the Wikitext data set for the CSI-negative corpus. Second, we adopt deep learning models such as CNN or LSTM to extract adequate feature vectors from the embedding models and integrate the feature vectors into one classifier. To validate the effectiveness of the proposed model, we compare our method with two baseline classification models: (1) a model based on a single embedding model constructed with the CSI-positive corpus only and (2) another model with the CSI-negative corpus only. As a result, the proposed model shows high accuracy, i.e., an F1-score of 0.934 and an area under the curve (AUC) of 0.935, improving on the baseline models by 1.76∼6.74% in F1-score and by 1.64∼6.98% in AUC. Full article
(This article belongs to the Special Issue Data Security)
10 pages, 1608 KiB  
Article
A Neural Network Decomposition Algorithm for Mapping on Crossbar-Based Computing Systems
by Choongmin Kim, Jacob A. Abraham, Woochul Kang and Jaeyong Chung
Electronics 2020, 9(9), 1526; https://doi.org/10.3390/electronics9091526 - 18 Sep 2020
Cited by 2 | Viewed by 2279
Abstract
Crossbar-based neuromorphic computing to accelerate neural networks is a popular alternative to conventional von Neumann computing systems. It is also referred to as processing-in-memory and in-situ analog computing. Crossbars have a fixed number of synapses per neuron, and it is necessary to decompose neurons to map networks onto the crossbars. This paper proposes the k-spare decomposition algorithm, which can trade off predictive performance against neuron usage during mapping. The proposed algorithm performs a two-level hierarchical decomposition. In the first, global decomposition, it decomposes the neural network such that each crossbar has k spare neurons. These neurons are used to improve the accuracy of the partially mapped network in the subsequent local decomposition. Our experimental results using modern convolutional neural networks show that the proposed method can improve accuracy substantially with only about 10% extra neurons. Full article
(This article belongs to the Section Artificial Intelligence)
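The global pass can be sketched as a simple capacity split that deliberately reserves k spare slots per crossbar for the later local (accuracy-recovery) pass, which is omitted here; the function and its signature are illustrative, not the paper's implementation.

```python
import math

def global_decompose(n_neurons, crossbar_size, k):
    """Global pass of a k-spare decomposition: split a layer of n_neurons
    across crossbars that each hold crossbar_size neurons, filling only
    crossbar_size - k slots so that k spares remain per crossbar.
    Returns the list of neuron indices assigned to each crossbar.
    """
    usable = crossbar_size - k
    assert usable > 0
    n_xbars = math.ceil(n_neurons / usable)
    return [list(range(i * usable, min((i + 1) * usable, n_neurons)))
            for i in range(n_xbars)]
```

Reserving spares costs extra crossbars up front (here ceil(n/(size−k)) instead of ceil(n/size)), which is the neuron-usage side of the accuracy trade-off the paper studies.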