Electronics, Volume 11, Issue 1 (January-1 2022) – 169 articles

Cover Story: Reconfigurable computing provides a paradigm for creating intelligent systems that differs from classic software computing. A few vendors have dominated this market with proprietary tools, resulting in fragmented ecosystems and little interoperation. A complete FPGA toolchain has recently emerged from the open-source community. Robotics is an expanding application field that may benefit from FPGAs for developing the logic between sensors and actuators, improving responsiveness and reducing power consumption. Reactive robot behaviors can be easily designed following the reconfigurable computing approach and that toolchain, which provides abstractions such as circuit blocks and wires for building robot brains. Visual programming and block libraries make the process painless, reliable, and more reusable.
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the tables of contents of newly released issues.
  • Papers are published in both HTML and PDF forms; PDF is the official format. To view a paper in PDF format, click on the "PDF Full-text" link and open it with the free Adobe Reader.
29 pages, 10828 KiB  
Review
A Road towards 6G Communication—A Review of 5G Antennas, Arrays, and Wearable Devices
by Muhammad Ikram, Kamel Sultan, Muhammad Faisal Lateef and Abdulrahman S. M. Alqadami
Electronics 2022, 11(1), 169; https://doi.org/10.3390/electronics11010169 - 05 Jan 2022
Cited by 69 | Viewed by 9814
Abstract
Next-generation communication systems and wearable technologies aim to achieve high data rates, low energy consumption, and massive connections because of the extensive increase in the number of Internet-of-Things (IoT) and wearable devices. These devices will be employed for many services, such as cellular, environment monitoring, telemedicine, biomedical, and smart traffic services. It is therefore challenging for current communication devices to accommodate such a high number of services. This article summarizes the motivation and potential of the 6G communication system and discusses its key features. Afterward, the current state of the art of 5G antenna technology, including existing 5G antennas and arrays and 5G wearable antennas, is summarized. The article also describes useful methods and techniques from existing antenna design work that could mitigate the challenges and concerns of the emerging 5G and 6G applications. The key features and requirements of wearable antennas for next-generation technology are presented at the end of the paper. Full article
(This article belongs to the Special Issue Prospective Multiple Antenna Technologies for 5G and Beyond)
Show Figures

Figure 1

17 pages, 14856 KiB  
Article
Steering a Robotic Wheelchair Based on Voice Recognition System Using Convolutional Neural Networks
by Mohsen Bakouri, Mohammed Alsehaimi, Husham Farouk Ismail, Khaled Alshareef, Ali Ganoun, Abdulrahman Alqahtani and Yousef Alharbi
Electronics 2022, 11(1), 168; https://doi.org/10.3390/electronics11010168 - 05 Jan 2022
Cited by 19 | Viewed by 5309
Abstract
Many wheelchair users depend on others to control the movement of their wheelchairs, which significantly influences their independence and quality of life. Smart wheelchairs offer a degree of self-dependence and the freedom to drive one's own vehicle. In this work, we designed and implemented a low-cost software and hardware method to steer a robotic wheelchair. From this method, we developed our own Android mobile app based on Flutter software. A convolutional neural network (CNN)-based network-in-network (NIN) structure integrated with a voice recognition model was also developed and configured to build the mobile app. The technique was implemented and configured using an offline Wi-Fi hotspot between the software and hardware components. Five voice commands (yes, no, left, right, and stop) guided and controlled the wheelchair through the Raspberry Pi and DC motor drives. The overall system was evaluated on an English isolated-word speech corpus, trained and validated with native Arabic speakers, to assess the performance of the Android OS application. The maneuverability of indoor and outdoor navigation was also evaluated in terms of accuracy. The results indicated an accuracy of approximately 87.2% in predicting the five voice commands. Additionally, in the real-time performance test, the root-mean-square deviation (RMSD) values between the planned and actual nodes for indoor/outdoor maneuvering were 1.721 × 10−5 and 1.743 × 10−5, respectively. Full article
(This article belongs to the Topic Advanced Systems Engineering: Theory and Applications)
Show Figures

Figure 1
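The command-to-motion stage described in the abstract lends itself to a small dispatch table between the CNN keyword recognizer and the DC motor driver. The sketch below is a hypothetical illustration: the five command names come from the paper, but the wheel-speed values, the confidence threshold, and the fail-safe rule are assumptions for illustration.

```python
# Hypothetical dispatch stage between a keyword recognizer and a DC motor
# driver. Speed pairs are (left wheel, right wheel) in [-1, 1]; the exact
# values and the confidence threshold are illustrative assumptions.
COMMANDS = {
    "yes":   (0.5, 0.5),    # drive forward
    "no":    (-0.5, -0.5),  # reverse
    "left":  (-0.3, 0.3),   # pivot left
    "right": (0.3, -0.3),   # pivot right
    "stop":  (0.0, 0.0),
}

def dispatch(command, confidence, threshold=0.8):
    """Map a recognized voice command to wheel speeds; unknown or
    low-confidence predictions fail safe to a full stop."""
    if command not in COMMANDS or confidence < threshold:
        return COMMANDS["stop"]
    return COMMANDS[command]
```

Failing safe on low-confidence predictions matters here because a misrecognized command moves a physical wheelchair; stopping is the only action that is always acceptable.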

17 pages, 1159 KiB  
Article
Fine Grained Access Control Based on Smart Contract for Edge Computing
by Yong Zhu, Xiao Wu and Zhihui Hu
Electronics 2022, 11(1), 167; https://doi.org/10.3390/electronics11010167 - 05 Jan 2022
Cited by 3 | Viewed by 2724
Abstract
Traditional centralized access control faces data security and privacy problems. The core server is the main target of attacks, the risks of a single point of failure and load bottlenecks are difficult to solve effectively, and a third-party data center cannot protect data owners. Traditional distributed access control faces the problem of how to effectively meet the scalability and diversified requirements of IoT (Internet of Things) applications. SCAC (Smart Contract-based Access Control) is based on ABAC (Attribute-Based Access Control) and RBAC (Role-Based Access Control). It can be applied to various types of nodes in different application scenarios, where attributes are used as the basic decision elements and authorization is granted by role. The research objective is to combine the efficiency of service orchestration in edge computing with the security of the consensus mechanism in blockchain, making full use of smart contract programmability to explore a fine-grained access control mode on the basis of the traditional access control paradigm. By designing an SSH-based interface for edge computing and blockchain access, SCAC parameters can be found and set to adjust ACLs (Access Control Lists) and their policies. The blockchain-edge computing combination is powerful in causing significant transformations across several industries, paving the way for new business models and novel decentralized applications. The rationality of the typical process behavior of management services and data access control is verified using CPN (Colored Petri Net) Tools 4.0, and data statistics on fine-grained access control, decentralized scalability, and lightweight deployment are then obtained by running instances in this study. The results show that authorization takes into account both security and efficiency with the "blockchain-edge computing" combination. Full article
(This article belongs to the Special Issue Blockchain for IoT and Cyber-Physical Systems)
Show Figures

Figure 1
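The decision model the abstract describes (attributes as the basic decision elements, layered on role-based authorization) can be sketched off-chain as a simple policy check. The rule layout and field names below are assumptions for illustration; in SCAC this logic would be encoded in a smart contract rather than a local function.

```python
def access_decision(subject, resource, action, policy):
    """Grant access if some policy rule matches one of the subject's roles
    (RBAC), the requested resource and action, and every required
    attribute value (ABAC conditions layered on top of the role)."""
    for rule in policy:
        if (rule["role"] in subject["roles"]
                and rule["resource"] == resource
                and action in rule["actions"]
                and all(subject["attrs"].get(k) == v
                        for k, v in rule.get("attrs", {}).items())):
            return True
    return False
```

The attribute conditions are what make the control fine-grained: a subject holding the right role but the wrong attribute value (say, a different department) is still denied.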

19 pages, 6857 KiB  
Article
Coordinated Control of Voltage Balancers for the Regulation of Unbalanced Voltage in a Multi-Node Bipolar DC Distribution Network
by Chunsheng Guo, Yuhong Wang and Jianquan Liao
Electronics 2022, 11(1), 166; https://doi.org/10.3390/electronics11010166 - 05 Jan 2022
Cited by 16 | Viewed by 1852
Abstract
In a bipolar DC distribution network, unbalanced load resistance, line resistance, and renewable energy sources cause an unbalanced current at each node of the neutral line and lead to unbalanced voltage. This is a power quality problem unique to bipolar DC distribution networks, which increases power loss in the network and, in serious cases, triggers overcurrent protection of the neutral line. A voltage balancer can be adopted to suppress the unbalanced voltage and current. However, the existing literature does not consider the coordinated application of multiple voltage balancers in a multi-node bipolar DC distribution network. This paper proposes a consensus control topology combining primary and secondary control in a radial multi-node bipolar DC distribution network with voltage balancers. The formulas for the positive and negative currents and the duty cycle of a bipolar DC distribution network with voltage balancers are derived, and an improved voltage balancer model based on a consensus algorithm is built. The radial multi-node bipolar DC distribution network is established in MATLAB/Simulink. The simulation results compare the consensus control with traditional droop control and verify the effectiveness of the new control structure with voltage balancers. Full article
Show Figures

Figure 1

22 pages, 14549 KiB  
Article
Verification in Relevant Environment of a Physics-Based Synthetic Sensor for Flow Angle Estimation
by Angelo Lerro, Piero Gili and Marco Pisani
Electronics 2022, 11(1), 165; https://doi.org/10.3390/electronics11010165 - 05 Jan 2022
Cited by 3 | Viewed by 1465
Abstract
In the area of synthetic sensors for flow angle estimation, the present work aims to describe the verification in a relevant environment of a physics-based approach using a dedicated technological demonstrator. The flow angle synthetic solution is based on a model-free, or physics-based, scheme and, therefore, it is applicable to any flying body. The demonstrator also encompasses physical sensors that provide all the necessary inputs to the synthetic sensors to estimate the angle-of-attack and the angle-of-sideslip. The uncertainty budgets of the physical sensors are evaluated to corrupt the flight simulator data with the aim of reproducing a realistic scenario to verify the synthetic sensors. The proposed approach for the flow angle estimation is suitable for modern and future aircraft, such as drones and urban mobility air vehicles. The results presented in this work show that the proposed approach can be effective in relevant scenarios even though some limitations can arise. Full article
(This article belongs to the Special Issue Unmanned Aircraft Systems with Autonomous Navigation)
Show Figures

Figure 1

32 pages, 4160 KiB  
Review
A Survey on LoRaWAN Technology: Recent Trends, Opportunities, Simulation Tools and Future Directions
by Mukarram A. M. Almuhaya, Waheb A. Jabbar, Noorazliza Sulaiman and Suliman Abdulmalek
Electronics 2022, 11(1), 164; https://doi.org/10.3390/electronics11010164 - 05 Jan 2022
Cited by 95 | Viewed by 10911
Abstract
Low-power wide-area network (LPWAN) technologies play a pivotal role in IoT applications owing to their capability to meet key IoT requirements (e.g., long range, low cost, small data volumes, massive device numbers, and low energy consumption). Among available LPWAN technologies, long-range wide-area network (LoRaWAN) technology has attracted much interest from both industry and academia due to its autonomous network architecture and open standard specification. This paper presents a comparative review of five selected driving LPWAN technologies: NB-IoT, SigFox, Telensa, Ingenu (RPMA), and LoRa/LoRaWAN. The comparison shows that LoRa/LoRaWAN and SigFox surpass the other technologies in terms of device lifetime, network capacity, adaptive data rate, and cost. In contrast, NB-IoT technology excels in latency and quality of service. Furthermore, we present a technical overview of LoRa/LoRaWAN technology, considering its main features, opportunities, and open issues. We also compare the most important recently developed simulation tools for investigating and analyzing LoRa/LoRaWAN network performance, and introduce a comparative evaluation of LoRa simulators to highlight their features. Furthermore, we classify recent efforts to improve LoRa/LoRaWAN performance in terms of energy consumption, pure data extraction rate, network scalability, network coverage, quality of service, and security. Finally, although we focus mainly on LoRa/LoRaWAN issues and solutions, we introduce guidance and directions for future research on LPWAN technologies. Full article
Show Figures

Figure 1

26 pages, 41806 KiB  
Article
POSIT vs. Floating Point in Implementing IIR Notch Filter by Enhancing Radix-4 Modified Booth Multiplier
by Anwar A. Esmaeel, Sa’ed Abed, Bassam J. Mohd and Abbas A. Fairouz
Electronics 2022, 11(1), 163; https://doi.org/10.3390/electronics11010163 - 05 Jan 2022
Cited by 2 | Viewed by 2353
Abstract
The increased demand for better accuracy and precision and wider data sizes has strained the current floating point system and motivated the development of the POSIT system. The POSIT system supports flexible formats and tapered precision and provides equivalent accuracy with fewer bits. This paper examines the POSIT and floating point systems, comparing the performance of 32-bit POSIT and 32-bit floating point systems using an IIR notch filter implementation. Given that the bulk of the calculations in the filter are multiplication operations, an Enhanced Radix-4 Modified Booth Multiplier (ERMBM) is implemented to increase calculation speed and efficiency. ERMBM improves area, speed, power, and energy compared to the regular POSIT multiplier by 26.80%, 51.97%, 0.54%, and 52.22%, respectively, without affecting accuracy. Moreover, the Taylor series technique is adopted to implement the division operation, along with a cosine arithmetic unit for POSIT numbers. Compared with floating point, POSIT achieves an accuracy of 92.31%, better than floating point's 23.08%. POSIT also reduces area by 21.77%, albeit at the cost of increased delay. However, when the ERMBM is utilized instead of the regular POSIT multiplier in implementing the filter, POSIT outperforms floating point in all performance metrics, including area, speed, power, and energy, by 35.68%, 20.66%, 31.49%, and 45.64%, respectively. Full article
(This article belongs to the Topic Advanced Systems Engineering: Theory and Applications)
Show Figures

Figure 1
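Independent of number format, radix-4 modified Booth recoding halves the number of partial products by scanning the multiplier in overlapping bit triplets, each contributing a digit in {-2, -1, 0, +1, +2}. A minimal behavioral sketch of that recoding (not the paper's hardware ERMBM, and without its enhancements) is:

```python
def booth_radix4_multiply(x, y, n=16):
    """Multiply x by an n-bit two's-complement multiplier y using radix-4
    modified Booth recoding (n assumed even). Each overlapping triplet
    (y[i+1], y[i], y[i-1]) yields the digit -2*y[i+1] + y[i] + y[i-1]."""
    ym = y & ((1 << n) - 1)           # two's-complement bit pattern of y
    acc, prev = 0, 0                  # prev is the implicit bit y[-1] = 0
    for i in range(0, n, 2):
        b_i = (ym >> i) & 1
        b_i1 = (ym >> (i + 1)) & 1
        digit = -2 * b_i1 + b_i + prev
        acc += (digit * x) << i       # one partial product per digit pair
        prev = b_i1
    return acc
```

In hardware each digit selects 0, ±x, or ±2x (a shift and optional negation), so an n-bit multiply needs only n/2 partial products instead of n.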

19 pages, 669 KiB  
Article
Exploring the Measurement Lab Open Dataset for Internet Performance Evaluation: The German Internet Landscape
by Ralf Lübben and Nico Misfeld
Electronics 2022, 11(1), 162; https://doi.org/10.3390/electronics11010162 - 05 Jan 2022
Viewed by 1813
Abstract
The Measurement Lab (MLab) provides a large and open collection of Internet performance measurements. We make use of it to examine the state of the German Internet through a structured analysis, in which we carve out expressive results from the dataset to identify busy hours and days and the impact of server locations and congestion control protocols, and to compare Internet service providers. Moreover, we examine the impact of the COVID-19 lockdown in Germany. We observe that only parts of the Internet showed a performance degradation at the beginning of the lockdown and that much of the impact on performance depends on the network in which the servers are located. Furthermore, the evolution of congestion control algorithms is reflected in performance improvements. For our analysis, we focus on the busy hours. From the end-user perspective, this time is of most interest for determining whether the network can support challenging services such as video streaming or cloud gaming during these intervals. Full article
(This article belongs to the Section Computer Science & Engineering)
Show Figures

Figure 1
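The busy-hour focus can be reproduced from such a dataset with a simple hour-of-day aggregation. The toy sketch below is an illustration only: the tuple layout and the "lowest average throughput = busiest" proxy are assumptions, not MLab's schema or the paper's exact methodology.

```python
from collections import defaultdict

def busy_hours(measurements, top=3):
    """Average download throughput per hour of day and return the `top`
    hours with the lowest average, a crude proxy for busy (congested)
    hours. Each measurement is a (hour_of_day, throughput_mbps) tuple."""
    sums, counts = defaultdict(float), defaultdict(int)
    for hour, mbps in measurements:
        sums[hour] += mbps
        counts[hour] += 1
    avg = {h: sums[h] / counts[h] for h in sums}
    return sorted(avg, key=avg.get)[:top]
```

A real analysis would additionally group by day of week, provider, and server network before averaging, since those are exactly the dimensions the study compares.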

15 pages, 2029 KiB  
Article
Electronically Adjustable Grounded Memcapacitor Emulator Based on Single Active Component with Variable Switching Mechanism
by Predrag B. Petrović
Electronics 2022, 11(1), 161; https://doi.org/10.3390/electronics11010161 - 05 Jan 2022
Cited by 6 | Viewed by 1725
Abstract
New current-mode grounded memcapacitor emulator circuits are reported in this paper, based on a single voltage differencing transconductance amplifier (VDTA) and two grounded capacitors. The proposed circuits have a single active-component matching constraint, while MOS capacitance can be used instead of classical capacitance when the emulator operates in a high frequency range of up to 50 MHz, thereby offering obvious benefits for realization as an integrated circuit (IC). The proposed emulator offers a variable switching mechanism (soft and hard) as well as the possibility of generating a negative memcapacitance characteristic, depending on the frequency of the input current signal and the applied capacitance. The influence of possible non-idealities and parasitic effects was analysed in order to reduce their side effects and keep the outcome within acceptable limits through the selection of passive elements. For verification purposes, a PSPICE simulation environment with CMOS 0.18 μm TSMC technology parameters was selected. An experimental check was performed with off-the-shelf components (IC MAX435), showing satisfactory agreement with theoretical assumptions and conclusions. Full article
(This article belongs to the Section Circuit and Signal Processing)
Show Figures

Figure 1

13 pages, 1283 KiB  
Communication
Mobility-Aware Hybrid Flow Rule Cache Scheme in Software-Defined Access Networks
by Youngjun Kim, Jinwoo Park and Yeunwoong Kyung
Electronics 2022, 11(1), 160; https://doi.org/10.3390/electronics11010160 - 05 Jan 2022
Cited by 6 | Viewed by 1773
Abstract
Due to dynamic mobility, the proactive flow rule cache method has become a promising solution in software-defined networking (SDN)-based access networks to reduce the number of flow rule installation procedures between the forwarding nodes and the SDN controller. However, since each forwarding node has a flow rule cache limit, an efficient flow rule cache strategy is required. To address this challenge, this paper proposes a mobility-aware hybrid flow rule cache scheme. Based on a comparison between the delay requirement of the incoming flow and the response delay of the controller, the proposed scheme decides to install the flow rule either proactively or reactively for the target candidate forwarding nodes. To find the optimal number of proactive flow rules subject to the flow rule cache limits, an integer linear programming (ILP) problem is formulated and solved using a heuristic method. Extensive simulation results demonstrate that the proposed scheme outperforms existing schemes in terms of flow table utilization ratio, flow rule installation delay, and flow rule hit ratio under various settings. Full article
(This article belongs to the Special Issue Applied AI-Based Platform Technology and Application)
Show Figures

Figure 1
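The proactive-versus-reactive decision can be illustrated with a greedy stand-in for the ILP (the paper solves its formulation heuristically; this specific rule and the field names are illustrative assumptions): flows whose delay requirement cannot tolerate a reactive installation round-trip are cached proactively, highest arrival rate first, up to the cache limit.

```python
def select_proactive(flows, cache_limit, controller_delay):
    """Each flow is a dict with 'id', 'delay_req', and 'arrival_rate'.
    Return the ids to install proactively: delay-critical flows
    (delay_req < controller response delay), most frequent first,
    truncated to the forwarding node's rule-cache limit. All other
    flows fall back to reactive installation on first packet."""
    critical = [f for f in flows if f["delay_req"] < controller_delay]
    critical.sort(key=lambda f: f["arrival_rate"], reverse=True)
    return [f["id"] for f in critical[:cache_limit]]
```

Sorting by arrival rate is one plausible tie-breaker for the scarce cache slots; the ILP in the paper optimizes this choice jointly across candidate forwarding nodes.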

17 pages, 3483 KiB  
Article
Multiparameter Identification of Permanent Magnet Synchronous Motor Based on Model Reference Adaptive System—Simulated Annealing Particle Swarm Optimization Algorithm
by Guoyong Su, Pengyu Wang, Yongcun Guo, Gang Cheng, Shuang Wang and Dongyang Zhao
Electronics 2022, 11(1), 159; https://doi.org/10.3390/electronics11010159 - 05 Jan 2022
Cited by 13 | Viewed by 2420
Abstract
The accurate identification of permanent magnet synchronous motor (PMSM) parameters is the basis for high-performance drive control. Traditional PMSM multiparameter identification methods suffer from uncertain identification results and low accuracy because the mathematical model of motor control is under-ranked. A multiparameter identification method for PMSMs based on a model reference adaptive system and simulated annealing particle swarm optimization (MRAS-SAPSO) is proposed here. The algorithm first identifies the electrical parameters of the PMSM (stator winding resistance R, quadrature-axis inductance L, and flux linkage ψf) by means of the model reference adaptive system method. Second, the result is used as the initial population in particle swarm optimization to further optimize and identify the electrical and mechanical parameters (moment of inertia J and damping coefficient B) in the motor control system. Additionally, to avoid problems such as premature convergence of the particle swarm during the search, adaptive simulated annealing is introduced to optimize the multiparameter identification. The simulation results show that the five parameters identified by the MRAS-SAPSO algorithm are highly accurate and stable, with errors below 2% relative to the real values. This also verifies the effectiveness and reliability of the identification method. Full article
(This article belongs to the Section Systems & Control Engineering)
Show Figures

Figure 1
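The SA-PSO combination can be illustrated on a toy one-dimensional problem: standard PSO velocity and position updates, plus a simulated-annealing acceptance rule that occasionally lets a worse candidate replace the global best to fight premature convergence. This is a generic sketch under those assumptions, not the paper's exact algorithm or its PMSM model.

```python
import math
import random

def sapso_minimize(f, lo, hi, n_particles=20, iters=150, seed=7):
    """Minimize f on [lo, hi] with PSO; the global best is updated via a
    simulated-annealing acceptance test, and the best value ever seen is
    tracked separately so the annealed exploration cannot lose it."""
    rng = random.Random(seed)
    pos = [rng.uniform(lo, hi) for _ in range(n_particles)]
    vel = [0.0] * n_particles
    pbest, pbest_val = pos[:], [f(x) for x in pos]
    gbest = min(pos, key=f)
    gbest_val = f(gbest)
    best, best_val = gbest, gbest_val
    temp = 1.0                                   # annealing temperature
    for _ in range(iters):
        for i in range(n_particles):
            r1, r2 = rng.random(), rng.random()
            vel[i] = (0.7 * vel[i] + 1.5 * r1 * (pbest[i] - pos[i])
                      + 1.5 * r2 * (gbest - pos[i]))
            pos[i] = min(hi, max(lo, pos[i] + vel[i]))
            v = f(pos[i])
            if v < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i], v
            delta = v - gbest_val
            # SA acceptance: always take improvements; sometimes take a
            # worse point while the temperature is still high.
            if delta < 0 or rng.random() < math.exp(-delta / max(temp, 1e-12)):
                gbest, gbest_val = pos[i], v
            if gbest_val < best_val:
                best, best_val = gbest, gbest_val
        temp *= 0.95                             # cooling schedule
    return best, best_val
```

In the paper's setting, f would be the error between measured and model-predicted motor behavior over the five parameters, with the MRAS estimates seeding the initial swarm.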

14 pages, 2710 KiB  
Article
Evaluating Remote Task Assignment of an Online Engineering Module through Data Mining in a Virtual Communication Platform Environment
by Zoe Kanetaki, Constantinos Stergiou, Georgios Bekas, Christos Troussas and Cleo Sgouropoulou
Electronics 2022, 11(1), 158; https://doi.org/10.3390/electronics11010158 - 05 Jan 2022
Cited by 9 | Viewed by 2020
Abstract
E-learning has traditionally emphasised educational resources, web access, student participation, and social interaction. Novel virtual spaces, e-lectures, and digital laboratories with synchronous or asynchronous practices were developed during the migration from face-to-face teaching to remote teaching under the pandemic restrictions. This research paper presents a case study evaluating the online task assignment of students, using MS Teams as an electronic platform. MS Teams was evaluated to determine whether this communication platform for online lecture delivery and task assessment could avoid potential problems arising during the teaching process. Students' data were collected, and after filtering significant information from the online questionnaires, a statistical analysis comprising a correlation and a reliability analysis was conducted. The substantial impact of 37 variables was revealed. Cronbach's alpha coefficient calculation showed that 89% of the survey questions represented internally consistent and reliable variables, and the sampling adequacy measure was calculated at 0.816. On the basis of students' diligence, interaction abilities, and knowledge embedding, two groups of learners were differentiated. The findings of this study shed light on the special features of fully online teaching, specifically in terms of improving assessment through digital tools, and merit further investigation in virtual and blended teaching spaces, with the goal of extracting outputs that will benefit the educational community. Full article
(This article belongs to the Special Issue Mobile Learning and Technology Enhanced Learning during COVID-19)
Show Figures

Figure 1
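Cronbach's alpha, used above to assess internal consistency, compares the sum of the per-item variances with the variance of the respondents' total scores. A self-contained sketch (the data layout is an assumption; the formula itself is standard):

```python
def cronbach_alpha(items):
    """items: one list per questionnaire item, each holding the scores of
    the same respondents in the same order. Returns
    alpha = k/(k-1) * (1 - sum of item variances / variance of totals)."""
    k, n = len(items), len(items[0])

    def sample_var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    totals = [sum(col[r] for col in items) for r in range(n)]
    return k / (k - 1) * (1 - sum(map(sample_var, items)) / sample_var(totals))
```

Perfectly correlated items give alpha = 1; values around 0.9, as reported in the study, indicate high but not perfect internal consistency.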

14 pages, 2100 KiB  
Article
Deep Q-Learning-Based Neural Network with Privacy Preservation Method for Secure Data Transmission in Internet of Things (IoT) Healthcare Application
by Nirmala Devi Kathamuthu, Annadurai Chinnamuthu, Nelson Iruthayanathan, Manikandan Ramachandran and Amir H. Gandomi
Electronics 2022, 11(1), 157; https://doi.org/10.3390/electronics11010157 - 04 Jan 2022
Cited by 21 | Viewed by 3238
Abstract
The healthcare industry is being transformed by the Internet of Things (IoT), as it provides wide connectivity among physicians, medical devices, clinical and nursing staff, and patients to simplify the task of real-time monitoring. As the network is vast and heterogeneous, opportunities and challenges are presented in gathering and sharing information. Because medical devices handle sensitive patient information such as health status, they must be protected to ensure safety and privacy. Healthcare information is confidentially shared among experts to analyze healthcare and to provide timely treatment for patients. Cryptographic and biometric systems, including deep-learning (DL) techniques, are widely used to authenticate, detect anomalies, and provide security for medical systems. As sensors in the network are energy-restricted devices, security and efficiency must be balanced, which is the most important consideration when deploying a security system based on deep-learning approaches. Hence, in this work, an innovative framework, the deep Q-learning-based neural network with privacy preservation method (DQ-NNPP), was designed to protect data transmission from external threats with less encryption and decryption time. This method is used to process patient data, which reduces network traffic as well as the cost and error of communication. The proposed model outperformed some standard approaches, such as the secure and anonymous biometric-based user authentication scheme (SAB-UAS), MSCryptoNet, and privacy-preserving disease prediction (PPDP). Specifically, the proposed method achieved an accuracy of 93.74%, a sensitivity of 92%, a specificity of 92.1%, a communication overhead of 67.08%, an encryption time of 58.72 ms, and a decryption time of 62.72 ms. Full article
Show Figures

Figure 1

29 pages, 6795 KiB  
Review
Artificial Neural Networks and Deep Learning Techniques Applied to Radar Target Detection: A Review
by Wen Jiang, Yihui Ren, Ying Liu and Jiaxu Leng
Electronics 2022, 11(1), 156; https://doi.org/10.3390/electronics11010156 - 04 Jan 2022
Cited by 20 | Viewed by 8121
Abstract
Radar target detection (RTD) is a fundamental and important process of the radar system, designed to differentiate and measure targets against a complex background. Deep learning methods have recently gained great attention and have turned out to be feasible solutions in radar signal processing. Compared with conventional RTD methods, deep learning-based methods can extract features automatically and yield more accurate results. Applying deep learning to RTD is considered a novel concept. In this paper, we review the applications of deep learning in the field of RTD and summarize their possible limitations. This work is timely due to the increasing number of research works published in recent years. We hope that this survey will provide guidelines for future studies and applications of deep learning in RTD and related areas of radar signal processing. Full article
(This article belongs to the Special Issue Theory and Applications of Fuzzy Systems and Neural Networks)
Show Figures

Figure 1

30 pages, 2107 KiB  
Article
Towards Human Stress and Activity Recognition: A Review and a First Approach Based on Low-Cost Wearables
by Juan Antonio Castro-García, Alberto Jesús Molina-Cantero, Isabel María Gómez-González, Sergio Lafuente-Arroyo and Manuel Merino-Monge
Electronics 2022, 11(1), 155; https://doi.org/10.3390/electronics11010155 - 04 Jan 2022
Cited by 14 | Viewed by 3152
Abstract
Detecting stress while performing physical activities is an interesting field that has received relatively little research attention to date. In this paper, we took a first step towards redressing this, through a comprehensive review and the design of a low-cost body area network (BAN) made of a set of wearables that allow physiological signals and human movements to be captured simultaneously. We used four different wearables: OpenBCI and three other open-hardware custom-made designs that communicate via Bluetooth Low Energy (BLE) to an external computer, following the edge-computing concept, hosting applications for data synchronization and storage. We obtained a large number of physiological signals (electroencephalography (EEG), electrocardiography (ECG), breathing rate (BR), electrodermal activity (EDA), and skin temperature (ST)) with which we analyzed internal states in general, but with a focus on stress. The findings show the reliability and feasibility of the proposed BAN in terms of battery lifetime (greater than 15 h), packet loss rate (0% for our custom-made designs), and signal quality (signal-to-noise ratio (SNR) of 9.8 dB for the ECG circuit and 61.6 dB for the EDA). Moreover, we conducted a preliminary experiment to gauge the main ECG features for stress detection during rest. Full article
17 pages, 1671 KiB  
Article
An Efficient Method for Generating Adversarial Malware Samples
by Yuxin Ding, Miaomiao Shao, Cai Nie and Kunyang Fu
Electronics 2022, 11(1), 154; https://doi.org/10.3390/electronics11010154 - 04 Jan 2022
Cited by 4 | Viewed by 1930
Abstract
Deep learning methods have been applied to malware detection. However, deep learning algorithms are not safe and can easily be fooled by adversarial samples. In this paper, we study how to generate malware adversarial samples using deep learning models. Gradient-based methods are usually used to generate adversarial samples, but they do so case-by-case, which makes generating a large number of adversarial samples very time-consuming. To address this issue, we propose a novel method for generating adversarial malware samples. Unlike gradient-based methods, we extract feature byte sequences from benign samples. Feature byte sequences represent the characteristics of benign samples and can affect the classification decision. We directly inject feature byte sequences into malware samples to generate adversarial samples. Feature byte sequences can be shared to produce different adversarial samples, allowing a large number of adversarial samples to be generated efficiently. We compare the proposed method with random-injection and gradient-based methods. The experimental results show that the adversarial samples generated using our method have a high success rate. Full article
(This article belongs to the Special Issue High Accuracy Detection of Mobile Malware Using Machine Learning)
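The injection idea described in the abstract can be sketched in a few lines. This is an illustrative toy, not the authors' implementation; the function name, the byte values, and the fixed injection offset are assumptions, and a real attack must place the sequence so the binary still executes.

```python
def inject_feature_bytes(malware: bytes, feature_seq: bytes, offset: int) -> bytes:
    """Insert a benign feature byte sequence into a malware byte stream.

    In practice the offset must be chosen so the file still runs
    (e.g. padding areas or appended sections); here it is a plain splice.
    """
    return malware[:offset] + feature_seq + malware[offset:]

# One extracted sequence can be shared across many malware samples,
# which is what makes batch generation cheap compared with
# per-sample gradient-based crafting.
malware_sample = b"\x4d\x5a\x90\x00" + b"\x00" * 16  # toy stand-in for a binary
benign_feature = b"\xde\xad\xbe\xef"                 # toy "benign" sequence
adversarial = inject_feature_bytes(malware_sample, benign_feature, offset=4)
```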
11 pages, 3750 KiB  
Article
A New Memristive Neuron Map Model and Its Network’s Dynamics under Electrochemical Coupling
by Balamurali Ramakrishnan, Mahtab Mehrabbeik, Fatemeh Parastesh, Karthikeyan Rajagopal and Sajad Jafari
Electronics 2022, 11(1), 153; https://doi.org/10.3390/electronics11010153 - 04 Jan 2022
Cited by 31 | Viewed by 1964
Abstract
A memristor is a vital circuit element that can mimic biological synapses. This paper proposes a memristive version of a recently proposed map neuron model based on the phase space. The dynamics of the memristive map model are investigated using bifurcation and Lyapunov exponent diagrams. The results prove that the memristive map can present different behaviors such as spiking, periodic bursting, and chaotic bursting. Then, a ring network is constructed with hybrid electrical and chemical synapses, and the memristive neuron models are used to describe the nodes. The collective behavior of the network is studied. It is observed that chemical coupling plays a crucial role in synchronization. Different kinds of synchronization, such as imperfect synchronization, complete synchronization, solitary state, two-cluster synchronization, chimera, and nonstationary chimera, are identified by varying the coupling strengths. Full article
(This article belongs to the Special Issue Feature Papers in Circuit and Signal Processing)
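The Lyapunov exponent diagrams mentioned in the abstract estimate the average exponential divergence of nearby orbits of a map. As a minimal illustration of the technique (using the classic logistic map, not the paper's memristive neuron model), the exponent is the long-run average of ln|f'(x)| along an orbit; a positive value indicates chaos.

```python
import math

def lyapunov_logistic(r: float, x0: float = 0.4,
                      n_transient: int = 500, n: int = 5000) -> float:
    """Estimate the Lyapunov exponent of x -> r*x*(1-x)."""
    x = x0
    for _ in range(n_transient):          # discard the transient
        x = r * x * (1.0 - x)
    s = 0.0
    for _ in range(n):
        x = r * x * (1.0 - x)
        # ln|f'(x)| with a tiny guard against log(0)
        s += math.log(abs(r * (1.0 - 2.0 * x)) + 1e-12)
    return s / n
```

At r = 4 the estimate approaches ln 2 ≈ 0.693 (chaotic), while at r = 2.5 the orbit settles on a fixed point and the exponent is negative; sweeping r and plotting the result yields the kind of diagram the abstract refers to.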
14 pages, 3858 KiB  
Article
Pipeline Leak Detection and Estimation Using Fuzzy PID Observer
by Raheleh Jafari, Sina Razvarz, Cristóbal Vargas-Jarillo, Alexander Gegov and Farzad Arabikhan
Electronics 2022, 11(1), 152; https://doi.org/10.3390/electronics11010152 - 04 Jan 2022
Cited by 2 | Viewed by 1561
Abstract
A pipe is a ubiquitous product in industry, used to convey liquids, gases, or solids suspended in a liquid, e.g., a slurry, from one location to another. Both internal and external cracking can result in structural failure of an industrial piping system and possibly decrease the service life of the equipment. The chaos and complexity associated with the uncertain behaviour inherent in pipeline systems make real-time detection and localisation of leaks difficult. Timely detection of leakage is important in order to reduce losses and avoid serious environmental consequences. The objective of this paper is to propose a new leak detection method based on an autoregressive with exogenous input (ARX) Laguerre fuzzy proportional-integral-derivative (PID) observation system. In this work, the ARX–Laguerre model has been used to generate better performance in the presence of uncertainty. According to the results, the proposed technique can detect leaks accurately and effectively. Full article
(This article belongs to the Special Issue Theory and Applications of Fuzzy Systems and Neural Networks)
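Observer-based leak detection of the kind the abstract describes ultimately reduces to comparing measured flow against the observer's prediction and flagging samples whose residual exceeds a threshold. The sketch below shows only that generic residual test; the ARX–Laguerre fuzzy PID observer that produces the predictions is the paper's contribution and is not reproduced here.

```python
def detect_leak(measured_flow, predicted_flow, threshold):
    """Flag a leak wherever the observer residual |measured - predicted|
    exceeds the threshold. The predictions would come from the observer
    model; here they are just an input list."""
    residuals = [m - p for m, p in zip(measured_flow, predicted_flow)]
    return [abs(r) > threshold for r in residuals]

# Toy trace: the second sample deviates strongly from the prediction.
flags = detect_leak([1.0, 1.5, 1.0], [1.0, 1.0, 1.05], threshold=0.2)
```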
15 pages, 11651 KiB  
Article
Joint Estimation of SOC and Available Capacity of Power Lithium-Ion Battery
by Bo Huang, Changhe Liu, Minghui Hu, Lan Li, Guoqing Jin and Huiqian Yang
Electronics 2022, 11(1), 151; https://doi.org/10.3390/electronics11010151 - 04 Jan 2022
Cited by 4 | Viewed by 1487
Abstract
Temperature has an important effect on the battery model. A dual-polarization equivalent circuit model considering temperature is established to quantify this effect, and the initial parameters of the model are identified through experiments. To overcome the drawback of preset noise, the H-infinity filter algorithm is used in place of the traditional extended Kalman filter, without assuming that the process noise and measurement noise obey a Gaussian distribution. To eliminate the influence of battery aging on SOC estimation, and considering the different time-varying characteristics of the battery states and parameters, a dual time-scale double H-infinity filter is used to jointly estimate the revised SOC and available capacity. The simulation results at two temperatures show that, compared with the single time scale, the dual time-scale double H-infinity filter reduces the simulation time by nearly 90% while the accuracy remains almost unchanged, which proves that the proposed joint estimation algorithm has the dual advantages of high precision and high efficiency. Full article
(This article belongs to the Section Power Electronics)
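The dual time-scale structure in the abstract (fast SOC updates every sample, slow capacity updates every N samples) can be sketched as a loop skeleton. This toy replaces both H-infinity filters with placeholders (coulomb counting for SOC, a fixed capacity decay factor), so only the two-rate control flow is from the abstract; every numeric detail is an assumption.

```python
def joint_estimate(currents, voltages, soc0=0.5, cap0=3600.0,
                   dt=1.0, slow_every=10):
    """Dual time-scale loop skeleton: SOC is updated every sample (fast
    scale); capacity is refreshed only every `slow_every` samples (slow
    scale), which is what saves most of the computation."""
    soc, cap = soc0, cap0
    out = []
    for k, (i_k, v_k) in enumerate(zip(currents, voltages)):
        # fast scale: coulomb counting (stand-in for the first filter;
        # a real filter would also use the voltage measurement v_k)
        soc -= i_k * dt / cap
        soc = min(max(soc, 0.0), 1.0)
        if (k + 1) % slow_every == 0:
            # slow scale: placeholder capacity correction (stand-in for
            # the second H-infinity filter tracking battery aging)
            cap *= 0.999
        out.append((soc, cap))
    return out
```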
21 pages, 11004 KiB  
Article
Underwater Image Enhancement Using Improved CNN Based Defogging
by Meicheng Zheng and Weilin Luo
Electronics 2022, 11(1), 150; https://doi.org/10.3390/electronics11010150 - 04 Jan 2022
Cited by 18 | Viewed by 3969
Abstract
Due to refraction, absorption, and scattering of light by suspended particles in water, underwater images are characterized by low contrast, blurred details, and color distortion. In this paper, a fusion algorithm to restore and enhance underwater images is proposed. It consists of a color restoration module, an end-to-end defogging module, and a brightness equalization module. In the color restoration module, a color balance algorithm based on the CIE Lab color model is proposed to alleviate the effect of color deviation in underwater images. In the end-to-end defogging module, one end is the input image and the other end is the output image. A CNN network is proposed to connect these two ends and to improve the contrast of the underwater images. In the CNN network, a sub-network is used to reduce the depth of the network required to obtain the same features. Depthwise separable convolutions are used to reduce the number of parameters computed during network training. A basic attention module is introduced to highlight important areas in the image. To improve the defogging network's ability to extract global information, a cross-layer connection and a pooling pyramid module are added. In the brightness equalization module, a contrast-limited adaptive histogram equalization method is used to coordinate the overall brightness. The proposed fusion algorithm for underwater image restoration and enhancement is verified by experiments and comparison with previous deep learning models and traditional methods. Comparison results show that the color correction and detail enhancement achieved by the proposed method are superior. Full article
(This article belongs to the Section Computer Science & Engineering)
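The brightness equalization module uses contrast-limited adaptive histogram equalization (CLAHE). As a simplified stand-in, the sketch below implements plain global histogram equalization on an 8-bit grayscale pixel list; CLAHE additionally clips the histogram and works on local tiles, which this toy omits.

```python
def equalize(gray):
    """Global histogram equalization for 8-bit grayscale values (0-255).
    A simplified stand-in for the contrast-limited adaptive variant
    (CLAHE) used in the paper."""
    hist = [0] * 256
    for v in gray:
        hist[v] += 1
    cdf, total = [], 0
    for h in hist:                      # cumulative distribution
        total += h
        cdf.append(total)
    cdf_min = next(c for c in cdf if c > 0)
    n = len(gray)
    if n == cdf_min:                    # single-valued image: nothing to do
        return list(gray)
    # map each level through the normalized CDF to stretch contrast
    lut = [round((c - cdf_min) / (n - cdf_min) * 255) for c in cdf]
    return [lut[v] for v in gray]
```

A low-contrast image with levels clustered at 50 and 100 gets stretched to the full 0-255 range.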
15 pages, 25833 KiB  
Article
Modeling and Fabrication of a Reconfigurable RF Output Stage for Nanosatellite Communication Subsystems
by Jose L. Alvarez-Flores, Jorge Flores-Troncoso, Leonel Soriano-Equigua, Jorge Simón, Joel A. Castillo, Ramón Parra-Michel and Viktor I. Rodriguez-Abdala
Electronics 2022, 11(1), 149; https://doi.org/10.3390/electronics11010149 - 04 Jan 2022
Cited by 1 | Viewed by 2051
Abstract
Current small satellite platforms such as CubeSats require robust and versatile communication subsystems that allow reconfiguration of critical operating parameters such as carrier frequency, transmission power, bandwidth, or filter roll-off factor. A reconfigurable Analog Back-End for the space segment of a satellite communication subsystem is presented in this work. The prototype is implemented on a 9.5 cm², 6-layer PCB; it operates from 70 MHz to 6 GHz and complies with the CubeSat and IPC-2221 standards. The processing, control, and synchronization stages follow a Software-Defined Radio approach executed on a baseband processor. Results showed that the signal power at the output of the proposed Analog Back-End is suitable for feeding the subsequent antenna subsystem. Furthermore, the radiation levels emitted by the transmission lines do not generate electromagnetic interference. Full article
(This article belongs to the Section Circuit and Signal Processing)
16 pages, 3849 KiB  
Article
Ensemble Averaging of Transfer Learning Models for Identification of Nutritional Deficiency in Rice Plant
by Mayuri Sharma, Keshab Nath, Rupam Kumar Sharma, Chandan Jyoti Kumar and Ankit Chaudhary
Electronics 2022, 11(1), 148; https://doi.org/10.3390/electronics11010148 - 04 Jan 2022
Cited by 35 | Viewed by 3532
Abstract
Computer vision-based automation has become popular in detecting and monitoring plants' nutrient deficiencies in recent times. The predictive models developed by various researchers were designed for use in embedded systems, keeping in mind the limited computational resources available. Nevertheless, the enormous popularity of smartphone technology has opened the door of opportunity for common farmers to access high computing resources. To facilitate smartphone users, this study proposes a framework that hosts high-end systems in the cloud, where processing can be done, and farmers can interact with the cloud-based system. With the availability of high computational power, many studies have focused on applying Convolutional Neural Network-based Deep Learning (CNN-based DL) architectures, including Transfer Learning (TL) models, to agricultural research. Ensembling various TL architectures has the potential to improve the performance of predictive models to a great extent. In this work, six TL architectures, viz. InceptionV3, ResNet152V2, Xception, DenseNet201, InceptionResNetV2, and VGG19, are considered, and their various ensemble models are used to carry out the task of deficiency diagnosis in rice plants. Two publicly available datasets from Mendeley and Kaggle are used in this study. The ensemble-based architecture enhanced the highest classification accuracy from 99.17% to 100% on the Mendeley dataset, and from 90% to 92% on the Kaggle dataset. Full article
(This article belongs to the Special Issue Hybrid Developments in Cyber Security and Threat Analysis)
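Ensemble averaging of the kind described in the abstract typically means soft voting: average the class-probability vectors produced by the individual models and take the argmax. The sketch below shows that operation on plain lists; whether the paper uses exactly unweighted averaging is an assumption.

```python
def ensemble_average(prob_lists):
    """Soft-voting ensemble: average per-class probabilities across
    models and return (predicted class index, averaged vector)."""
    n_models = len(prob_lists)
    n_classes = len(prob_lists[0])
    avg = [sum(p[c] for p in prob_lists) / n_models for c in range(n_classes)]
    return avg.index(max(avg)), avg

# Three toy models' softmax outputs over three deficiency classes:
pred, avg = ensemble_average([
    [0.6, 0.3, 0.1],   # model 1 favors class 0
    [0.2, 0.5, 0.3],   # model 2 favors class 1
    [0.3, 0.4, 0.3],   # model 3 favors class 1
])
```

Averaging lets a confident majority override a single model's mistake, which is the mechanism behind the accuracy gains reported in the abstract.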
13 pages, 2596 KiB  
Article
Enhanced Millimeter-Wave 3-D Imaging via Complex-Valued Fully Convolutional Neural Network
by Handan Jing, Shiyong Li, Ke Miao, Shuoguang Wang, Xiaoxi Cui, Guoqiang Zhao and Houjun Sun
Electronics 2022, 11(1), 147; https://doi.org/10.3390/electronics11010147 - 04 Jan 2022
Cited by 14 | Viewed by 2400
Abstract
To solve the problems of high computational complexity and unstable image quality inherent in the compressive sensing (CS) method, we propose a complex-valued fully convolutional neural network (CVFCNN)-based method for near-field enhanced millimeter-wave (MMW) three-dimensional (3-D) imaging. A generalized form of the complex parametric rectified linear unit (CPReLU) activation function with independent and learnable parameters is presented to improve the performance of CVFCNN. The CVFCNN structure is designed, and the formulas of the complex-valued back-propagation algorithm are derived in detail, in response to the lack of a machine learning library for a complex-valued neural network (CVNN). Compared with a real-valued fully convolutional neural network (RVFCNN), the proposed CVFCNN offers better performance while needing fewer parameters. In addition, it outperforms the CVFCNN that was used in radar imaging with different activation functions. Numerical simulations and experiments are provided to verify the efficacy of the proposed network, in comparison with state-of-the-art networks and the CS method for enhanced MMW imaging. Full article
(This article belongs to the Section Microwave and Wireless Communications)
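A common formulation of a complex parametric ReLU in the complex-valued network literature applies PReLU independently to the real and imaginary parts, each with its own learnable slope. The paper's generalized CPReLU is along these lines, but the exact form below is an assumption for illustration, not the authors' definition.

```python
def cprelu(z: complex, alpha_re: float = 0.1, alpha_im: float = 0.1) -> complex:
    """PReLU applied separately to the real and imaginary parts of z,
    with independent slopes alpha_re and alpha_im for negative inputs
    (a hedged sketch of a complex parametric ReLU)."""
    re = z.real if z.real > 0 else alpha_re * z.real
    im = z.imag if z.imag > 0 else alpha_im * z.imag
    return complex(re, im)
```

With independent, learnable slopes per part (and, in a network, per channel), the activation can adapt its nonlinearity to the real and imaginary components separately, which is the kind of flexibility the abstract attributes to the generalized CPReLU.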
4 pages, 195 KiB  
Editorial
Recent Advances in Antenna Design for 5G Heterogeneous Networks
by Issa Elfergani, Abubakar Sadiq Hussaini, Jonathan Rodriguez and Raed A. Abd-Alhameed
Electronics 2022, 11(1), 146; https://doi.org/10.3390/electronics11010146 - 04 Jan 2022
Cited by 1 | Viewed by 1553
Abstract
Fifth-generation (5G) networks will support significantly faster mobile broadband speeds, low latency, and reliable communications, as well as enabling the full potential of the Internet of Things (IoT) [...] Full article
(This article belongs to the Special Issue Recent Advances in Antenna Design for 5G Heterogeneous Networks)
16 pages, 1079 KiB  
Article
Analysing Factory Workers’ Acceptance of Collaborative Robots: A Web-Based Tool for Company Representatives
by Marco Baumgartner, Tobias Kopp and Steffen Kinkel
Electronics 2022, 11(1), 145; https://doi.org/10.3390/electronics11010145 - 04 Jan 2022
Cited by 10 | Viewed by 3289
Abstract
Collaborative robots are a new type of lightweight robot that is especially suitable for small and medium-sized enterprises. They offer new interaction opportunities and thereby pose new challenges with regard to technology acceptance. Despite acknowledging the importance of acceptance issues, small and medium-sized enterprises often lack coherent strategies to identify barriers and foster acceptance. Therefore, in this article, we present a collection of crucial acceptance factors with regard to collaborative robot use at the industrial workplace. Based on these factors, we present a web-based tool to estimate employee acceptance, provide company representatives with practical recommendations, and stimulate reflection on acceptance issues. An evaluation with three German small and medium-sized enterprises reveals that the tool's concept meets the demands of small and medium-sized enterprises and is perceived as beneficial, as it raises awareness and deepens knowledge of this topic. To realise the economic potential, further low-threshold usable tools are needed to transfer research findings into the daily practice of small and medium-sized enterprises. Full article
(This article belongs to the Special Issue Human-Robot Collaboration in Manufacturing)
12 pages, 5737 KiB  
Article
Dual-Band, Dual-Output Power Amplifier Using Simplified Three-Port, Frequency-Dividing Matching Network
by Xiaopan Chen, Yongle Wu and Weimin Wang
Electronics 2022, 11(1), 144; https://doi.org/10.3390/electronics11010144 - 04 Jan 2022
Cited by 4 | Viewed by 2411
Abstract
This study presents a dual-band power amplifier (PA) with two output ports using a simplified three-port, frequency-dividing matching network. The dual-band, dual-output PA can amplify a dual-band signal with one transistor, and the diplexer-like output matching network (OMN) divides the two bands into different output ports. A structure consisting of a λ/4 open stub and a λ/4 transmission line was applied to suppress undesired signals, making each branch equivalent to an open circuit at the other band's frequency. A three-stub design reduced the complexity of the OMN. Second-order harmonic impedances were tuned for better efficiency. The PA was designed with a 10-W gallium nitride high-electron-mobility transistor (GaN HEMT). It achieved a drain efficiency (DE) of 55.84% and 53.77%, with corresponding output powers of 40.22 and 40.77 dBm at 3.5 and 5.0 GHz, respectively. The 40%-DE bandwidths were over 200 MHz in both bands. Full article
(This article belongs to the Special Issue Feature Papers in Circuit and Signal Processing)
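The λ/4 elements in the abstract are sized from the operating frequency: a quarter wavelength is c/(4·f·√ε_eff), where ε_eff is the effective permittivity of the line (ε_eff = 1 gives the free-space length; the paper's substrate value is not stated here, so it is left as a parameter).

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def quarter_wave_length_mm(freq_hz: float, eps_eff: float = 1.0) -> float:
    """Physical length in millimetres of a quarter-wavelength line at
    freq_hz, shortened by the square root of the effective permittivity."""
    wavelength_m = C / (freq_hz * eps_eff ** 0.5)
    return wavelength_m / 4.0 * 1000.0
```

At 3.5 GHz in free space this gives about 21.4 mm; on a real PCB substrate (ε_eff > 1) the stub is proportionally shorter.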
13 pages, 1865 KiB  
Article
Facial Skincare Products’ Recommendation with Computer Vision Technologies
by Ting-Yu Lin, Hung-Tse Chan, Chih-Hsien Hsia and Chin-Feng Lai
Electronics 2022, 11(1), 143; https://doi.org/10.3390/electronics11010143 - 03 Jan 2022
Cited by 7 | Viewed by 5387
Abstract
Acne is a skin issue that plagues many young people and adults. Even when cured, it leaves acne spots or scars, which drives many individuals to use skincare products or undertake medical treatment. Conversely, the use of inappropriate skincare products can exacerbate the condition of the skin. In view of this, this work proposes the use of computer vision (CV) technology to realize a new business model for facial skincare products. The overall framework is composed of a finger-vein identification system, a skincare product recommendation system, and an electronic payment system. The finger-vein identification system provides identity verification and personalized service. The skincare product recommendation system provides consumers with professional skin analysis through skin type classification and acne detection to recommend skincare products that improve consumers' skin issues. The electronic payment system provides a variety of checkout methods, and the system completes checkout through finger-vein identification linked to membership information. Experimental results showed that the finger-vein system achieved the lowest equal error rate (EER) on the public FV-USM database with the shortest response time, as well as the highest skin type classification accuracy. Full article
(This article belongs to the Special Issue Applied Deep Learning and Multimedia Electronics)
11 pages, 337 KiB  
Article
Persistent Postural-Perceptual Dizziness Interventions—An Embodied Insight on the Use of Virtual Reality for Technologists
by Syed Fawad M. Zaidi, Niusha Shafiabady and Justin Beilby
Electronics 2022, 11(1), 142; https://doi.org/10.3390/electronics11010142 - 03 Jan 2022
Cited by 4 | Viewed by 2767
Abstract
Persistent and inconsistent unsteadiness with nonvertiginous dizziness (persistent postural-perceptual dizziness (PPPD)) can negatively impact quality of life. This study highlights that the use of virtual reality (VR) systems offers bimodal benefits to PPPD, such as understanding symptoms and providing a basis for treatment. The aim is to develop an understanding of PPPD and its interventions, including current trends in VR involvement, to extrapolate and re-evaluate VR design strategies. Therefore, recent virtual-reality-based research that has advanced the understanding of PPPD is identified, collected, and analysed. This study proposes a novel approach to the understanding of PPPD, specifically for VR technologists, and examines the principles of effectively aligning VR development with PPPD interventions. Full article
(This article belongs to the Special Issue Virtual Reality and Scientific Visualization)
48 pages, 12306 KiB  
Review
A Survey of Recommendation Systems: Recommendation Models, Techniques, and Application Fields
by Hyeyoung Ko, Suyeon Lee, Yoonseo Park and Anna Choi
Electronics 2022, 11(1), 141; https://doi.org/10.3390/electronics11010141 - 03 Jan 2022
Cited by 148 | Viewed by 31285
Abstract
This paper reviews the research trends that link the advanced technical aspects of recommendation systems used in various service areas with the business aspects of these services. First, for a reliable analysis of recommendation models, data mining technology, and related research by application service, more than 135 articles from top-ranking journals and top-tier conferences indexed in Google Scholar between 2010 and 2021 were collected and reviewed. Based on this, studies on recommendation system models and the technologies used in recommendation systems were systematized, and research trends by year were analyzed. In addition, the application service fields where recommendation systems are used were classified, and research on the recommendation models and techniques used in each field was analyzed. Furthermore, vast amounts of application service-related data used by recommendation systems were collected from 2010 to 2021 without taking journal ranking into consideration and reviewed along with various recommendation system studies, as well as industry data from the applied service fields. As a result of this study, it was found that the flow and quantitative growth of various detailed studies of recommendation systems interact with the business growth of the actual applied service field. While providing a comprehensive summary of recommendation systems, this study offers insight to many researchers interested in recommendation systems through its analysis of the various technologies and trends in the service fields to which recommendation systems are applied. Full article
(This article belongs to the Special Issue Electronic Solutions for Artificial Intelligence Healthcare Volume II)
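Among the model families such a survey covers, one of the simplest is item-based recommendation by cosine similarity. The sketch below scores unseen items by their similarity to items the user rated highly; the rating threshold, the toy vectors, and the scoring rule are illustrative assumptions, not any particular surveyed system.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def recommend(user_ratings, item_vectors, top_k=2):
    """Rank items the user has not rated by total similarity to the
    items the user liked (rating >= 4)."""
    liked = [i for i, r in user_ratings.items() if r >= 4]
    scores = {}
    for item, vec in item_vectors.items():
        if item in user_ratings:
            continue   # only recommend unseen items
        scores[item] = sum(cosine(vec, item_vectors[l]) for l in liked)
    return sorted(scores, key=scores.get, reverse=True)[:top_k]

# Toy catalog: "b" is close to the liked item "a", "c" is orthogonal.
picks = recommend({"a": 5}, {"a": [1, 0], "b": [1, 0.1], "c": [0, 1]})
```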
19 pages, 6166 KiB  
Article
Optimized Deep Learning Algorithms for Tomato Leaf Disease Detection with Hardware Deployment
by Hesham Tarek, Hesham Aly, Saleh Eisa and Mohamed Abul-Soud
Electronics 2022, 11(1), 140; https://doi.org/10.3390/electronics11010140 - 03 Jan 2022
Cited by 36 | Viewed by 4492
Abstract
Smart agriculture has attracted increasing attention during the last decade due to the bio-hazards of climate change impacts, extreme weather events, population explosion, food security demands, and natural resource shortages. The Egyptian government has taken the initiative in dealing with plant diseases, especially those of tomato, one of the most important vegetable crops worldwide, which is affected by many diseases causing high yield losses. Deep learning techniques have become the main focus in identifying tomato leaf diseases. This study evaluated different deep learning models pre-trained on the ImageNet dataset, such as ResNet50, InceptionV3, AlexNet, MobileNetV1, MobileNetV2, and MobileNetV3. To the best of our knowledge, MobileNetV3 has not previously been tested on tomato leaf diseases. Each of these deep learning models was evaluated and optimized with different techniques. The evaluation shows that MobileNetV3 Small achieved an accuracy of 98.99%, while MobileNetV3 Large achieved an accuracy of 99.81%. All models were deployed on a workstation to evaluate their performance by measuring the prediction time on tomato leaf images. The models were also deployed on a Raspberry Pi 4 in order to build an Internet of Things (IoT) device capable of tomato leaf disease detection. MobileNetV3 Small had a latency of 66 ms and 251 ms on the workstation and the Raspberry Pi 4, respectively. On the other hand, MobileNetV3 Large had a latency of 50 ms on the workstation and 348 ms on the Raspberry Pi 4. Full article
(This article belongs to the Collection Electronics for Agriculture)
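Latency figures like the abstract's per-device timings are typically obtained by timing repeated inference calls after a few warm-up runs. The harness below is a generic sketch of that measurement (the warm-up and run counts are assumptions; the abstract does not state the authors' protocol), with the model call passed in as a plain function.

```python
import time

def mean_latency_ms(fn, n_warmup=3, n_runs=10):
    """Average wall-clock latency of fn() in milliseconds, after warm-up
    runs so caches and lazy initialization do not skew the figure."""
    for _ in range(n_warmup):
        fn()
    start = time.perf_counter()
    for _ in range(n_runs):
        fn()
    return (time.perf_counter() - start) / n_runs * 1000.0

# In a real benchmark, fn would wrap the model's predict() on one image;
# here a cheap stand-in workload is timed instead.
lat = mean_latency_ms(lambda: sum(range(1000)))
```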