Electronics, Volume 11, Issue 21 (November-1 2022) – 219 articles

Cover Story: Resistive voltage dividers tend to have a highly nonlinear transfer function as parasitic and stray capacitances exert an increasing influence with increasing frequency. As a consequence, the measured signal differs in shape from the original input signal. Here, we present an improved resistive voltage divider using surface-mounted resistors with an additional compensation electrode on the adjacent side of a printed circuit board to extend the linear bandwidth. This new concept improves the linear bandwidth from 115 kHz to 88 MHz, while maintaining a DC input impedance of 10 MΩ. Finally, the developed resistive voltage divider was successfully used to measure fast high-voltage transients.
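The compensation at work in such dividers follows the classic condition R1·C1 = R2·C2: when the RC time constants of the two arms match, the division ratio becomes frequency-independent. A small Python sketch of the effect (component values are illustrative, not taken from the measurement):

```python
import cmath

def divider_gain(f, r1, c1, r2, c2):
    """|Vout/Vin| of a two-arm divider with parallel RC impedances."""
    s = 2j * cmath.pi * f
    z1 = r1 / (1 + s * r1 * c1)   # upper arm
    z2 = r2 / (1 + s * r2 * c2)   # lower arm
    return abs(z2 / (z1 + z2))

# Illustrative values: 10 MOhm total, 100:1 ratio at DC
R1, R2 = 9.9e6, 0.1e6
C1_STRAY = 0.5e-12            # assumed stray capacitance across the upper arm
C2_TINY = 1e-18               # effectively no capacitance on the lower arm

dc_gain = divider_gain(1.0, R1, C1_STRAY, R2, C2_TINY)     # ~0.01
hf_gain = divider_gain(10e6, R1, C1_STRAY, R2, C2_TINY)    # gain rises badly

# Compensation: choose C2 so that R1*C1 = R2*C2, making the ratio
# frequency-independent (the job the compensation electrode performs)
C2_COMP = R1 * C1_STRAY / R2
flat_gain = divider_gain(10e6, R1, C1_STRAY, R2, C2_COMP)  # ~0.01 again
```

Without compensation, the 100:1 divider's gain climbs toward unity at high frequency; with the matched time constant it stays at its DC value.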
17 pages, 1136 KiB  
Article
Federated Deep Reinforcement Learning for Joint AeBSs Deployment and Computation Offloading in Aerial Edge Computing Network
by Lei Liu, Yikun Zhao, Fei Qi, Fanqin Zhou, Weiliang Xie, Haoran He and Hao Zheng
Electronics 2022, 11(21), 3641; https://doi.org/10.3390/electronics11213641 - 7 Nov 2022
Cited by 2 | Viewed by 1983
Abstract
In the 6G aerial network, all aerial communication nodes have computing and storage functions and can perform real-time wireless signal processing and resource management. In order to make full use of the computing resources of aerial nodes, this paper studies a mobile edge computing (MEC) system based on aerial base stations (AeBSs), formulates the joint optimization problem of computation offloading and AeBS deployment control with the goals of minimizing task processing delay and energy consumption, and designs a deployment and computation offloading scheme based on federated deep reinforcement learning. Specifically, each low-altitude AeBS agent simultaneously trains two neural networks that generate the deployment and offloading strategies, respectively, while a high-altitude global node aggregates the local model parameters uploaded by each low-altitude platform. The agents can be trained offline, updated quickly online according to changes in the environment, and can rapidly generate the optimal deployment and offloading strategies. The simulation results show that our method achieves good performance in a very short time. Full article
(This article belongs to the Topic Machine Learning in Communication Systems and Networks)
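The aggregation step described here — a high-altitude global node averaging parameters uploaded by local agents — follows the standard federated averaging pattern. A minimal sketch (toy parameter vectors, not the paper's actual networks):

```python
# Classic FedAvg aggregation: the global node averages locally trained
# parameter vectors, optionally weighted by each agent's data volume.

def fed_avg(local_weights, sizes=None):
    """Average parameter vectors, optionally weighted by local data size."""
    n = len(local_weights)
    if sizes is None:
        sizes = [1] * n
    total = sum(sizes)
    dim = len(local_weights[0])
    return [sum(w[i] * s for w, s in zip(local_weights, sizes)) / total
            for i in range(dim)]

# Three agents upload their (toy) deployment-policy parameters
uploads = [[0.2, 1.0], [0.4, 2.0], [0.6, 3.0]]
global_params = fed_avg(uploads)   # ~[0.4, 2.0]
```

The averaged vector is then broadcast back to the agents as the new starting point for the next round of local training.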

24 pages, 5885 KiB  
Article
Anomalous Behavior Detection Based on the Isolation Forest Model with Multiple Perspective Business Processes
by Na Fang, Xianwen Fang and Ke Lu
Electronics 2022, 11(21), 3640; https://doi.org/10.3390/electronics11213640 - 7 Nov 2022
Cited by 2 | Viewed by 2297
Abstract
Anomalous behavior detection in business processes inspects abnormal situations, such as errors and missing values in system execution records, to facilitate safe system operation. Since anomaly information hinders the insightful investigation of event logs, many approaches have contributed to anomaly detection in either the business process domain or the data mining domain. However, most of them ignore the impact of the interaction between activities and their related attributes. Based on this, a method is constructed to integrate the consistency degree of multi-perspective log features and use it in an isolation forest model for anomaly detection. First, a reference model is captured from the event logs using process discovery. After that, the similarity between behaviors is analyzed based on the neighborhood distance between the logs and the reference model, and the data flow similarity is measured based on the matching relationship of the process activity attributes. Then, the integrated consistency measure is constructed. On this basis, composite log feature vectors are produced by combining the activity sequences and attribute sequences in the event logs and are fed to the isolation forest model for training. Subsequently, anomaly scores are calculated and anomalous behavior is determined based on different threshold-setting strategies. Finally, the proposed algorithm is implemented using the Scikit-learn framework and evaluated on real logs in terms of anomalous behavior recognition rate and model quality improvement. The experimental results show that the algorithm can detect abnormal behaviors in event logs and improve the model quality. Full article
(This article belongs to the Section Computer Science & Engineering)
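The paper trains scikit-learn's isolation forest on composite log feature vectors; the underlying idea — anomalies are isolated by random splits in fewer steps than normal points — can be sketched in pure Python (toy 2-D data and parameters, not the paper's features):

```python
import random

def isolation_tree(points, depth=0, max_depth=8):
    """Grow a tree by random axis-aligned splits until points are isolated."""
    if len(points) <= 1 or depth >= max_depth:
        return ("leaf",)
    dim = random.randrange(len(points[0]))
    lo = min(p[dim] for p in points)
    hi = max(p[dim] for p in points)
    if lo == hi:
        return ("leaf",)
    split = random.uniform(lo, hi)
    return ("node", dim, split,
            isolation_tree([p for p in points if p[dim] < split],
                           depth + 1, max_depth),
            isolation_tree([p for p in points if p[dim] >= split],
                           depth + 1, max_depth))

def path_length(tree, p, depth=0):
    if tree[0] == "leaf":
        return depth
    _, dim, split, left, right = tree
    return path_length(left if p[dim] < split else right, p, depth + 1)

def avg_path(forest, p):
    # Shorter average path = easier to isolate = more anomalous
    return sum(path_length(t, p) for t in forest) / len(forest)

random.seed(0)
normal = [(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(200)]
data = normal + [(9.0, 9.0)]                     # one planted anomaly
forest = [isolation_tree(random.sample(data, 64)) for _ in range(50)]
```

The planted outlier at (9, 9) gets a much shorter average path than an inlier near the origin; thresholding that score is what the different threshold-setting strategies in the paper tune.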

15 pages, 1347 KiB  
Article
Cost-Effective and Energy-Aware Resource Allocation in Cloud Data Centers
by Abadhan Saumya Sabyasachi and Jogesh K. Muppala
Electronics 2022, 11(21), 3639; https://doi.org/10.3390/electronics11213639 - 7 Nov 2022
Cited by 2 | Viewed by 1888
Abstract
Cloud computing supports the fast expansion of data and computer centers; therefore, energy and load balancing are vital concerns. The growing popularity of cloud computing has raised power usage and network costs. Frequent calls for computational resources may cause system instability; further, load balancing in the host requires migrating virtual machines (VMs) from overloaded to underloaded hosts, which affects energy usage. To solve this problem, this study proposes energy-aware virtual machine migration based on the whale optimization algorithm (WOA) for dynamic, cost-effective cloud data centers. The proposed cost-efficient whale optimization algorithm for virtual machine (CEWOAVM) technique places migrating virtual machines more effectively and optimizes system resources such as CPU, storage, and memory. The experimental results showed that the proposed algorithm saved 18.6%, 27.08%, and 36.3% energy when compared with the PSOCM, RAPSO-VMP, and DTH-MF algorithms, respectively. It also showed 12.68%, 18.7%, and 27.9% improvements in the number of virtual machine migrations and 14.4%, 17.8%, and 23.8% reductions in SLA violations, respectively. Full article
(This article belongs to the Section Computer Science & Engineering)

13 pages, 2084 KiB  
Article
Multitopic Coherence Extraction for Global Entity Linking
by Chao Zhang, Zhao Li, Shiwei Wu, Tong Chen and Xiuhao Zhao
Electronics 2022, 11(21), 3638; https://doi.org/10.3390/electronics11213638 - 7 Nov 2022
Cited by 1 | Viewed by 1156
Abstract
Entity linking is the process of linking mentions in a document with entities in a knowledge base. Collective entity disambiguation refers to the mapping of multiple mentions in a document to their corresponding entities in a knowledge base. Most previous research has been based on the assumption that all mentions in the same document represent the same topic. However, mentions usually correspond to different topics. In this article, we propose a new global model to explore the extraction of multitopic coherence in the same document. Herein, we present mention association graphs and candidate entity association graphs to obtain the multitopic coherence features of a document using graph neural networks (GNNs). In particular, we propose a variant GNN for our model and a particular graph readout function. We conducted extensive experiments on several datasets to demonstrate the effectiveness of the proposed model. Full article
(This article belongs to the Section Artificial Intelligence)

30 pages, 13794 KiB  
Article
Visual Saliency and Image Reconstruction from EEG Signals via an Effective Geometric Deep Network-Based Generative Adversarial Network
by Nastaran Khaleghi, Tohid Yousefi Rezaii, Soosan Beheshti, Saeed Meshgini, Sobhan Sheykhivand and Sebelan Danishvar
Electronics 2022, 11(21), 3637; https://doi.org/10.3390/electronics11213637 - 7 Nov 2022
Cited by 13 | Viewed by 3063
Abstract
Understanding how the brain perceives input data from the outside world is one of the great targets of neuroscience. Neural decoding helps us to model the connection between brain activities and visual stimulation, and the reconstruction of images from brain activity can be achieved through this modelling. Recent studies have shown that brain activity is influenced by visual saliency, the important parts of an image stimulus. In this paper, a deep model is proposed to reconstruct the image stimuli from electroencephalogram (EEG) recordings via visual saliency. To this end, the proposed geometric deep network-based generative adversarial network (GDN-GAN) is trained to map the EEG signals to the visual saliency map corresponding to each image. The first part of the proposed GDN-GAN consists of Chebyshev graph convolutional layers, and the input of the GDN part is the functional connectivity-based graph representation of the EEG channels. The output of the GDN is fed to the GAN part of the network to reconstruct the image saliency. The proposed GDN-GAN is trained using the Google Colaboratory Pro platform, and the saliency metrics validate the viability and efficiency of the proposed saliency reconstruction network. The weights of the trained network are used as initial weights to reconstruct the grayscale image stimuli, realizing image reconstruction from EEG signals. Full article
(This article belongs to the Section Bioelectronics)

26 pages, 9699 KiB  
Article
A Vision-Based Bio-Inspired Reinforcement Learning Algorithms for Manipulator Obstacle Avoidance
by Abhilasha Singh, Mohamed Shakeel, V. Kalaichelvi and R. Karthikeyan
Electronics 2022, 11(21), 3636; https://doi.org/10.3390/electronics11213636 - 7 Nov 2022
Cited by 1 | Viewed by 2163
Abstract
Path planning for robotic manipulators has proven to be a challenging issue in industrial applications. Despite providing precise waypoints, the traditional path planning algorithm requires a predefined map and is ineffective in complex, unknown environments. Reinforcement learning techniques can be used in cases where no environmental map is available. For vision-based path planning and obstacle avoidance in assembly line operations, this study introduces various Reinforcement Learning (RL) algorithms based on a discrete state-action space: Q-Learning, Deep Q Network (DQN), State-Action-Reward-State-Action (SARSA), and Double Deep Q Network (DDQN). By positioning the camera in an eye-to-hand configuration, this work used color-based segmentation to identify the locations of obstacles and the start and goal points. The homogeneous transformation technique was then used to convert the pixel values into robot coordinates. Furthermore, by adjusting the number of episodes, steps per episode, learning rate, and discount factor, a performance study of the RL algorithms was carried out. To further tune the training hyperparameters, genetic algorithms (GA) and particle swarm optimization (PSO) were employed. The length of the path travelled, the average reward, the average number of steps, and the time required to reach the objective point were all measured and compared for each of the test cases. Finally, the suggested methodology was evaluated using a live camera that recorded the robot workspace in real time, and the ideal path was drawn using a TAL BRABO 5 DOF manipulator. It was concluded that waypoints obtained via Double DQN showed improved performance and were able to avoid the obstacles and reach the goal point smoothly and efficiently. Full article
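Of the discrete state-action algorithms compared here, tabular Q-learning is the simplest; a self-contained sketch on a toy grid with one obstacle (grid, rewards, and hyperparameters are illustrative, not the study's setup):

```python
import random

# Tabular Q-learning on a 4x4 grid: bumping the obstacle is penalized,
# each move costs -1, and reaching the goal yields +10.

SIZE, GOAL, OBSTACLE = 4, (3, 3), (1, 1)
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # up, down, left, right

def step(state, action):
    r, c = state
    nr = max(0, min(SIZE - 1, r + action[0]))
    nc = max(0, min(SIZE - 1, c + action[1]))
    nxt = (nr, nc)
    if nxt == OBSTACLE:
        return state, -10.0, False          # bumping the obstacle is penalized
    if nxt == GOAL:
        return nxt, 10.0, True
    return nxt, -1.0, False                 # step cost encourages short paths

random.seed(1)
Q = {((r, c), a): 0.0 for r in range(SIZE) for c in range(SIZE)
     for a in range(len(ACTIONS))}
alpha, gamma, eps = 0.5, 0.9, 0.2

for _ in range(2000):
    s = (0, 0)
    for _ in range(50):
        a = (random.randrange(4) if random.random() < eps
             else max(range(4), key=lambda a: Q[(s, a)]))
        s2, r, done = step(s, ACTIONS[a])
        best_next = max(Q[(s2, b)] for b in range(4))
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
        s = s2
        if done:
            break

# Greedy rollout from the start should reach the goal while avoiding (1, 1)
path, s = [(0, 0)], (0, 0)
while s != GOAL and len(path) < 20:
    a = max(range(4), key=lambda a: Q[(s, a)])
    s, _, _ = step(s, ACTIONS[a])
    path.append(s)
```

SARSA differs only in using the action actually taken at s2 in the update instead of the max; DQN and DDQN replace the table with neural networks.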

32 pages, 784 KiB  
Article
The FMI 3.0 Standard Interface for Clocked and Scheduled Simulations
by Simon Thrane Hansen, Cláudio Ângelo Gonçalves Gomes, Masoud Najafi, Torsten Sommer, Matthias Blesken, Irina Zacharias, Oliver Kotte, Pierre R. Mai, Klaus Schuch, Karl Wernersson, Christian Bertsch, Torsten Blochwitz and Andreas Junghanns
Electronics 2022, 11(21), 3635; https://doi.org/10.3390/electronics11213635 - 7 Nov 2022
Cited by 3 | Viewed by 2107
Abstract
This paper presents an overview and formalization of the Functional Mock-up Interface (FMI) 3.0. The formalization captures the new FMI 3.0 standard and is intended as an introduction for conceptualizing the use of clocks in the FMI standard to support the simulation of event-based cyber-physical systems. The FMI 3.0 standard supports two kinds of clock-based simulation: synchronous clocked simulation, which ensures predictable system scheduling with multiple simultaneous events, and scheduled execution, which facilitates real-time simulations comprising multiple black-box models by allowing fine-grained control over the computation time of submodels. The formalization is a basis for developing tools for orchestrating, verifying and validating the composition of multiple FMUs, and is provided as an accessible VDM-SL specification. Full article
(This article belongs to the Special Issue Selected Papers from Modelica Conference 2021)

16 pages, 2147 KiB  
Article
Trajectory Recovery Based on Interval Forward–Backward Propagation Algorithm Fusing Multi-Source Information
by Biao Zhou, Xiuwei Wang, Junhao Zhou and Changqiang Jing
Electronics 2022, 11(21), 3634; https://doi.org/10.3390/electronics11213634 - 7 Nov 2022
Cited by 1 | Viewed by 1278
Abstract
In a tracking scheme in which the global navigation satellite system (GNSS) measurement is temporarily lost or the sampling frequency is insufficient, dead reckoning based on the inertial measurement unit (IMU) and other location-related information can be fused as a supplement for real-time trajectory recovery. A tracking scheme based on interval analysis outputs interval results containing the ground truth, which gives it the advantage of convenience in multi-source information fusion. In this paper, a trajectory-recovery algorithm based on interval analysis is proposed, which can conveniently fuse GNSS measurements, IMU data, and map constraints and then output an interval result containing the actual trajectory. In essence, location-related information such as satellite measurements, inertial data, and map constraints is collected in practical experiments and converted into interval form; interval-overlapping calculation is then performed through forward and backward propagation to accomplish the trajectory recovery. The practical experimental results show that trajectory recovery based on the proposed algorithm is more accurate than the traditional Kalman filter algorithm, and the estimated interval results deterministically contain the actual trajectory. More importantly, the proposed interval algorithm proves convenient for fusing additional location-related information. Full article
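The forward-backward interval idea can be illustrated in one dimension: propagate a position interval forward from an earlier GNSS fix using speed bounds, propagate backward from a later fix, and intersect the two (all numbers illustrative, not from the experiments):

```python
# Toy 1-D interval forward-backward propagation: the intersection of the
# forward- and backward-propagated intervals is guaranteed to contain the
# true position, provided the measurement intervals and speed bounds do.

def intersect(a, b):
    lo, hi = max(a[0], b[0]), min(a[1], b[1])
    if lo > hi:
        raise ValueError("inconsistent measurements")
    return (lo, hi)

def propagate(interval, v_min, v_max, dt):
    # The interval grows by the speed bounds over dt; a negative dt
    # (backward propagation) swaps the roles of v_min and v_max.
    lo = interval[0] + min(v_min * dt, v_max * dt)
    hi = interval[1] + max(v_min * dt, v_max * dt)
    return (lo, hi)

gnss_t0 = (0.0, 2.0)     # position interval at t = 0 s (metres)
gnss_t5 = (8.0, 10.0)    # position interval at t = 5 s
v_min, v_max = 1.0, 3.0  # speed bounds from IMU dead reckoning

t = 2.5  # recover the position interval at an unsampled instant
forward = propagate(gnss_t0, v_min, v_max, t)          # (2.5, 9.5)
backward = propagate(gnss_t5, v_min, v_max, t - 5.0)   # (0.5, 7.5)
recovered = intersect(forward, backward)               # (2.5, 7.5)
```

The backward pass tightens the forward estimate, which is the benefit of fusing the later fix; additional sources (e.g. map constraints) simply contribute more intervals to intersect.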

17 pages, 5550 KiB  
Article
Intelligent Fault Detection in Hall-Effect Rotary Encoders for Industry 4.0 Applications
by Ritik Agarwal, Ghanishtha Bhatti, R. Raja Singh, V. Indragandhi, Vishnu Suresh, Laura Jasinska and Zbigniew Leonowicz
Electronics 2022, 11(21), 3633; https://doi.org/10.3390/electronics11213633 - 7 Nov 2022
Cited by 7 | Viewed by 2421
Abstract
Sensors are the foundational components of any smart machine system and are invaluable in all modern technologies. Consequently, faults and errors in sensors can have a significant negative impact on the setup. Intelligent, lightweight, and accurate fault diagnosis and mitigation lie at the crux of modern industries. This study conceptualizes a germane solution in the domain of fault detection, focusing on Hall-effect rotary encoders. Position monitoring through rotary encoders is essential to the safety and seamless functioning of industrial equipment such as lifts and hoists, and of commercial systems such as automobiles. This work used multi-strategy learners to accurately diagnose quadrature and offset faults in Hall-effect rotary encoders. The obtained dataset was then run through a lightweight ensemble classifier to train a robust fault detection model. The complete mechanism was simulated through interconnected models in a MATLAB Simulink™ environment. In real time, the developed fault detection algorithm was embedded in an FPGA controller and tested with a 1 kW PMSM drive system. The resulting system is computationally inexpensive and achieves an accuracy of 95.8%, making it a feasible solution for industrial implementation. Full article
(This article belongs to the Special Issue Metaverse and Digital Twins)

11 pages, 3867 KiB  
Article
Modelling and Analysis of Adaptive Cruise Control System Based on Synchronization Theory of Petri Nets
by Qi Guo, Wangyang Yu, Fei Hao, Yuke Zhou and Yuan Liu
Electronics 2022, 11(21), 3632; https://doi.org/10.3390/electronics11213632 - 7 Nov 2022
Cited by 4 | Viewed by 1654
Abstract
The ACC (adaptive cruise control) system has developed rapidly in recent years, and its reliability and safety have attracted considerable attention. The ACC system can realize automatic driving following a vehicle in the longitudinal range, and its reliability is closely related to the synchronization between the two vehicles. Using formal modelling methods, this paper analyzes and detects a logical flaw, namely poor synchronization in the following process of the ACC system, from the perspective of synchronization. To avoid this kind of logical flaw, this paper presents a novel optimized modelling solution based on the synchronization theory of Petri nets and further improves the calculation method of the synchronic distance. The simulation results show that the improved model reduces token accumulation by an average of 91.357%, which demonstrates that it can effectively improve reliability and reduce the risk of rear-end collisions. Full article

20 pages, 875 KiB  
Article
Developing Cross-Domain Host-Based Intrusion Detection
by Oluwagbemiga Ajayi, Aryya Gangopadhyay, Robert F. Erbacher and Carl Bursat
Electronics 2022, 11(21), 3631; https://doi.org/10.3390/electronics11213631 - 7 Nov 2022
Cited by 3 | Viewed by 1963
Abstract
Digital transformation has continued to have a remarkable impact on industries, creating new possibilities and improving the performance of existing ones. Recently, we have seen more deployments of cyber-physical systems and the Internet of Things (IoT) than ever before. However, cybersecurity is often an afterthought in the design and implementation of many systems, so new attack surfaces are usually introduced as new systems and applications are deployed. Machine learning has been helpful in creating intrusion detection models, but it is impractical to create attack detection models with acceptable performance for every single computing infrastructure and the various attack scenarios due to the cost of collecting quality labeled data and training models. Hence, there is a need to develop models that can take advantage of knowledge available in a high-resource source domain to improve the performance of a low-resource target domain model. In this work, we propose a novel cross-domain deep learning-based approach for attack detection in Host-based Intrusion Detection Systems (HIDS). Specifically, we developed a method for candidate source domain selection from among a group of potential source domains by computing the similarity score that a target domain records when paired with each potential source domain. Then, using different word embedding space combination techniques and a transfer learning approach, we leverage the knowledge from a well-performing source domain model to improve the performance of a similar model in the target domain. To evaluate our proposed approach, we used the Leipzig Intrusion Detection Dataset (LID-DS), a HIDS dataset recorded on a modern operating system that consists of different attack scenarios. Our proposed cross-domain approach recorded significant improvement in the target domains when compared with the results from the in-domain experiments: the F2-score of the target domain CWE-307 improved from 80% in the in-domain approach to 87% in the cross-domain approach, while that of the target domain CVE-2014-0160 improved from 13% to 85%. Full article
(This article belongs to the Section Computer Science & Engineering)

12 pages, 5637 KiB  
Article
A Cost-Effective and Compact All-Digital Dual-Loop Jitter Attenuator for Built-Off-Test Applications
by Seungjun Kim, Junghoon Jin and Jongsun Kim
Electronics 2022, 11(21), 3630; https://doi.org/10.3390/electronics11213630 - 7 Nov 2022
Viewed by 1676
Abstract
A compact and low-power all-digital CMOS dual-loop jitter attenuator (DJA) for low-cost built-off-test (BOT) applications such as parallel multi-DUT testing is presented. The proposed DJA adopts a new digital phase interpolator (PI)-based clock recovery (CR) loop with an adaptive decimation filter (ADF) function to remove the jitter and phase noise of the input clock, and generate a phase-aligned clean output clock. In addition, by adopting an all-digital multi-phase multiplying delay-locked loop (MDLL), eight low-jitter evenly spaced reference clocks that are required for the PI are generated. In the proposed DJA, both the MDLL and PI-based CR are first-order systems, and so this DJA has the advantage of high system stability. In addition, the proposed DJA has the benefit of a wide operating frequency range, unlike general PLL-based jitter attenuators that have a narrow frequency range and a jitter peaking problem. Implemented in a 40 nm 0.9 V CMOS process, the proposed DJA generates cleaned programmable output clock frequencies from 2.4 to 4.7 GHz. Furthermore, it achieves a peak-to-peak and RMS jitter attenuation of –25.6 dB and –32.6 dB, respectively, at 2.4 GHz. In addition, it occupies an active area of only 0.0257 mm2 and consumes a power of 7.41 mW at 2.4 GHz. Full article
(This article belongs to the Special Issue Mixed Signal Circuit Design)

12 pages, 18428 KiB  
Article
Robotic Weld Image Enhancement Based on Improved Bilateral Filtering and CLAHE Algorithm
by Peng Lu and Qingjiu Huang
Electronics 2022, 11(21), 3629; https://doi.org/10.3390/electronics11213629 - 7 Nov 2022
Cited by 5 | Viewed by 1418
Abstract
Robotic welding requires a higher weld image resolution for easy weld identification; however, the higher the resolution, the higher the cost. Therefore, this paper proposes an improved CLAHE algorithm, which can not only effectively denoise images and retain edge information but also improve their contrast. First, an improved bilateral filtering algorithm is used to process high-resolution images, removing noise while preserving edge details. Then, the CLAHE (Contrast Limited Adaptive Histogram Equalization) algorithm and a Gaussian masking algorithm are applied to the denoised image, and differential processing is used to reduce the noise across the two resulting images while preserving image details and enhancing contrast, yielding the final enhanced image. Finally, the effectiveness of the algorithm is verified by comparing its peak signal-to-noise ratio and structural similarity with those of other algorithms. Full article
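The contrast-limited histogram equalization at the core of CLAHE can be sketched for a single tile; full CLAHE additionally tiles the image and blends neighbouring tile mappings bilinearly (tile data and clip limit are illustrative):

```python
# Contrast-limited histogram equalization on one tile of 8-bit pixels.

def clahe_tile(pixels, levels=256, clip_limit=4):
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    # Clip the histogram and redistribute the excess uniformly: this is what
    # limits noise amplification compared with plain histogram equalization.
    excess = sum(max(0, h - clip_limit) for h in hist)
    hist = [min(h, clip_limit) + excess // levels for h in hist]
    # Map grey levels through the cumulative distribution function
    cdf, total = [], 0
    for h in hist:
        total += h
        cdf.append(total)
    scale = (levels - 1) / cdf[-1]
    lut = [round(c * scale) for c in cdf]
    return [lut[p] for p in pixels]

# A low-contrast tile: values crowded into [100, 110] get spread out
tile = [100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 105]
out = clahe_tile(tile)
```

The input dynamic range of 10 grey levels is stretched across most of the 0-255 range, which is the contrast gain the paper then balances against noise via the clip limit.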

21 pages, 8429 KiB  
Article
Efficient Deep Reinforcement Learning for Optimal Path Planning
by Jing Ren, Xishi Huang and Raymond N. Huang
Electronics 2022, 11(21), 3628; https://doi.org/10.3390/electronics11213628 - 7 Nov 2022
Cited by 9 | Viewed by 4461
Abstract
In this paper, we propose a novel deep reinforcement learning (DRL) method for optimal path planning for mobile robots using dynamic programming (DP)-based data collection. The proposed method can overcome the slow learning process and poor training data quality inherent in DRL algorithms. The main idea of our approach is as follows. First, we mapped the dynamic programming method to typical optimal path planning problems for mobile robots and created a new efficient DP-based method to find an exact, analytical, optimal solution for the path planning problem. Then, we used high-quality training data gathered using the DP method for DRL, which greatly improves training data quality and learning efficiency. Next, we established a two-stage reinforcement learning method in which, prior to the DRL, we employed extreme learning machines (ELM) to initialize the parameters of the actor and critic neural networks to a near-optimal solution, significantly improving learning performance. Finally, we illustrated our method using some typical path planning tasks. The experimental results show that our DRL method converges much more easily and faster than other methods. The resulting action neural network is able to successfully guide robots from any start position in the environment to the goal position while following the optimal path and avoiding collisions with obstacles. Full article
(This article belongs to the Special Issue Autonomous Vehicles Technological Trends)
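The DP-based data collection can be illustrated with value iteration on a small grid: the exact costs-to-go it produces yield optimal (state, action) pairs to feed the DRL stage (grid and costs are illustrative, not the paper's formulation):

```python
# Value iteration on a 5x5 grid with a wall: V(s) is the exact minimum
# number of moves to the goal, so greedy moves w.r.t. V are exactly optimal.

SIZE, GOAL, WALLS = 5, (4, 4), {(2, 1), (2, 2), (2, 3)}
MOVES = [(-1, 0), (1, 0), (0, -1), (0, 1)]
INF = float("inf")

states = [(r, c) for r in range(SIZE) for c in range(SIZE)
          if (r, c) not in WALLS]
V = {s: (0.0 if s == GOAL else INF) for s in states}

# Sweep until no V(s) improves: V(s) = 1 + min over reachable neighbours
changed = True
while changed:
    changed = False
    for s in states:
        if s == GOAL:
            continue
        best = min((1.0 + V[(s[0] + dr, s[1] + dc)]
                    for dr, dc in MOVES
                    if (s[0] + dr, s[1] + dc) in V), default=INF)
        if best < V[s]:
            V[s] = best
            changed = True

def optimal_step(s):
    # Greedy successor under V; (s, successor) pairs are the high-quality
    # training data handed to the DRL learner.
    return min(((s[0] + dr, s[1] + dc) for dr, dc in MOVES
                if (s[0] + dr, s[1] + dc) in V), key=V.get)

path, s = [(0, 0)], (0, 0)
while s != GOAL:
    s = optimal_step(s)
    path.append(s)
```

Because V is exact, every harvested pair is optimal by construction, which is what distinguishes this data from the noisy trajectories ordinary DRL exploration produces.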

18 pages, 5710 KiB  
Communication
A Design Fiber Performance Monitoring Tool (FPMT) for Online Remote Fiber Line Performance Detection
by Ahmed Atef Ibrahim, Mohammed Mohammed Fouad and Azhar Ahmed Hamdi
Electronics 2022, 11(21), 3627; https://doi.org/10.3390/electronics11213627 - 7 Nov 2022
Cited by 4 | Viewed by 1623
Abstract
A new technique for detecting and monitoring fiber fault events in optical communication network systems is proposed. The fiber performance monitoring tool (FPMT) is designed to detect, locate, and estimate fiber faults cost-efficiently without interrupting the data flow, and to improve the availability and reliability of optical networks, as it detects fiber faults remotely in real time. In place of the traditional method, the proposed FPMT uses an optical time domain reflectometer to detect multiple types of fiber failures, e.g., fiber breaks, fiber end-face contamination, fiber end-face burning, large insertion losses on the connector and interconnection, or mismatches between two different types of fiber cables. The proposed technique detects fiber failures by analyzing the feedback and pattern shape of the signal reflected over the network fiber lines; it supports a wide range of distance testing and performance monitoring and can be performed inside an optical network in real time and remotely by integrating with an OSC board. The technique detects fiber faults with an average measurement accuracy of up to 99.8%, detects fiber line faults at distances of up to 150 km, and can improve the system power budget with a minimal insertion loss of 0.4 dB. The superiority of the suggested technique was verified successfully on real networks using Huawei lab infrastructure nodes in the simulation experiments. Full article
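The OTDR localization underlying such a tool is a time-of-flight calculation: a reflective event at distance d echoes back after t = 2nd/c, where n is the fibre's group index. A quick sketch (the group index is a typical figure for single-mode fibre, not a value from the paper):

```python
# Back-of-the-envelope OTDR fault localization from the round-trip echo delay.

C = 299_792_458.0      # speed of light in vacuum, m/s
N_GROUP = 1.468        # typical group index of standard single-mode fibre

def fault_distance(echo_delay_s, n_group=N_GROUP):
    """Distance to a reflective event, from the round-trip echo delay."""
    return C * echo_delay_s / (2.0 * n_group)

def echo_delay(distance_m, n_group=N_GROUP):
    """Round-trip delay of the reflection from an event at distance_m."""
    return 2.0 * n_group * distance_m / C

# A break 150 km down the line (the tool's quoted maximum range) echoes
# back after roughly 1.47 ms
delay = echo_delay(150e3)
d_est = fault_distance(delay)
```

In practice the event type (break, contamination, mismatch) is then classified from the shape of the reflected trace, which is the analysis step the FPMT automates.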
13 pages, 8430 KiB  
Article
MFFRand: Semantic Segmentation of Point Clouds Based on Multi-Scale Feature Fusion and Multi-Loss Supervision
by Zhiqing Miao, Shaojing Song, Pan Tang, Jian Chen, Jinyan Hu and Yumei Gong
Electronics 2022, 11(21), 3626; https://doi.org/10.3390/electronics11213626 - 7 Nov 2022
Cited by 3 | Viewed by 1554
Abstract
With the application of random sampling to the down-sampling of point cloud data, the processing speed of point clouds has been greatly improved. However, the utilization of semantic information is still insufficient. To address this problem, we propose a point cloud semantic segmentation network called MFFRand (Multi-Scale Feature Fusion Based on RandLA-Net). Based on RandLA-Net, a multi-scale feature fusion module is developed, which is stacked from encoder-decoders of different depths. The feature maps extracted by the multi-scale feature fusion module are continuously concatenated and fused. Furthermore, to train the network better, a multi-loss supervision module is proposed, which strengthens control over the training of local structure by adding sub-losses at the end of the different decoder structures. Moreover, the trained MFFRand network can be connected to the inference network through the different decoder terminals separately, enabling inference at different network depths. Compared to RandLA-Net, MFFRand improves mIoU on both the S3DIS and Semantic3D datasets, reaching 71.1% and 74.8%, respectively. Extensive experimental results on point cloud datasets demonstrate the effectiveness of our method. Full article
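The multi-loss supervision idea, a sub-loss at the end of each decoder depth summed into one training objective, can be sketched in a few lines; the weights and toy predictions below are assumptions for illustration, not the paper's settings:

```python
import numpy as np

def cross_entropy(probs, labels):
    """Mean per-point cross-entropy for one decoder head."""
    eps = 1e-9
    return float(-np.mean(np.log(probs[np.arange(len(labels)), labels] + eps)))

def multi_loss(head_probs, labels, weights):
    """Weighted sum of sub-losses, one per decoder depth
    (the weighting scheme is an assumed design choice)."""
    return sum(w * cross_entropy(p, labels) for w, p in zip(weights, head_probs))

labels = np.array([0, 1, 1, 0])
shallow = np.array([[0.6, 0.4]] * 4)           # shallow decoder: less confident
deep    = np.array([[0.9, 0.1], [0.1, 0.9],
                    [0.2, 0.8], [0.8, 0.2]])   # deepest decoder: more confident
total = multi_loss([shallow, deep], labels, weights=[0.3, 1.0])
print(round(total, 3))
```

Backpropagating this sum pushes gradient into every decoder depth, which is what strengthens control over the local structure during training.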
29 pages, 16333 KiB  
Article
Research on Bidirectional Isolated Charging System Based on Resonant Converter
by Kai Zhou and Yue Sun
Electronics 2022, 11(21), 3625; https://doi.org/10.3390/electronics11213625 - 6 Nov 2022
Cited by 2 | Viewed by 1872
Abstract
This paper proposes a two-stage bidirectional isolated charging system, which realizes the bidirectional flow of electric energy: it not only makes electric energy flow from the grid side to the battery side but also converts the battery's energy into single-phase alternating current to supply other electric equipment. The charging system also provides power factor correction and wide-range voltage output. The front stage of the charging system is a bidirectional totem-pole power factor correction converter with a voltage and current double closed-loop control strategy to ensure the stability of the DC bus voltage, and the rear stage is a bidirectional CLLLC resonant converter, which adopts a high-frequency soft-start strategy to reduce the inrush current during start-up and ensure the safe operation of the converter. Moreover, a frequency control strategy gives it wide-range DC output characteristics. In this paper, the principle analysis and parameter calculation of the charging system are carried out, the simulation platform and hardware circuit design are built, and a test prototype is piloted to verify the bidirectional operating characteristics of the charging system. The simulation and experimental results demonstrate the correctness of the theoretical analysis and topology design. Full article
(This article belongs to the Special Issue Energy Storage, Analysis and Battery Usage)
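Frequency control of a resonant converter like the CLLLC stage revolves around the tank's series resonant frequency, f_r = 1/(2π√(L_r·C_r)); switching above or below f_r moves the voltage gain. A minimal sketch of that relation, with assumed tank values (not the paper's):

```python
import math

def resonant_frequency_hz(l_r, c_r):
    """Series resonant frequency f_r = 1 / (2*pi*sqrt(L_r * C_r))."""
    return 1.0 / (2.0 * math.pi * math.sqrt(l_r * c_r))

# Illustrative tank values: 25 uH resonant inductor, 100 nF resonant capacitor
f_r = resonant_frequency_hz(25e-6, 100e-9)
print(round(f_r / 1e3, 1), "kHz")
```

Quadrupling the capacitance halves f_r, which is the lever the frequency control strategy uses to obtain the wide-range DC output.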
17 pages, 4838 KiB  
Article
Enhancing Sentiment Analysis via Random Majority Under-Sampling with Reduced Time Complexity for Classifying Tweet Reviews
by Saleh Naif Almuayqil, Mamoona Humayun, N. Z. Jhanjhi, Maram Fahaad Almufareh and Navid Ali Khan
Electronics 2022, 11(21), 3624; https://doi.org/10.3390/electronics11213624 - 6 Nov 2022
Cited by 5 | Viewed by 1719
Abstract
Twitter has become a unique platform for social interaction among people all around the world, leading to an extensive amount of knowledge that can be used for various purposes. People share and spread their own ideologies and points of view on unique topics, leading to the production of a lot of content. Sentiment analysis is of extreme importance to various businesses, as it can directly impact their important decisions. Challenges in sentiment analysis research include imbalanced datasets, lexical uniqueness, and processing time complexity. Most machine learning models are sequential: they need a considerable amount of time to complete execution. Therefore, we propose a sentiment analysis model specifically designed for imbalanced datasets that reduces the time complexity of the task by using various text-sequence preprocessing techniques combined with random majority under-sampling. Our proposed model provides results competitive with other models while simultaneously reducing the time complexity of sentiment analysis. The results obtained after experimentation corroborate that our model performs well, producing an accuracy of 86.5% and an F1 score of 0.874 through XGB. Full article
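Random majority under-sampling, the balancing step named above, amounts to randomly dropping majority-class samples until every class matches the smallest one. A minimal sketch with hypothetical class names and toy data:

```python
import random

def random_majority_undersample(samples, labels, seed=0):
    """Randomly keep only as many samples per class as the smallest
    class has, balancing the dataset before training."""
    rng = random.Random(seed)
    by_class = {}
    for x, y in zip(samples, labels):
        by_class.setdefault(y, []).append(x)
    n_min = min(len(v) for v in by_class.values())
    out_x, out_y = [], []
    for y, xs in by_class.items():
        for x in rng.sample(xs, n_min):  # uniform random subset, no replacement
            out_x.append(x)
            out_y.append(y)
    return out_x, out_y

texts = [f"tweet {i}" for i in range(10)]
labels = ["pos"] * 8 + ["neg"] * 2        # imbalanced: 8 vs 2
bal_x, bal_y = random_majority_undersample(texts, labels)
print(sorted(bal_y))
```

Because the classifier then sees fewer rows overall, the same step also cuts training time, which is the dual benefit the abstract claims.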
12 pages, 4810 KiB  
Article
Classifying Conditions of Speckle and Wrinkle on the Human Face: A Deep Learning Approach
by Tsai-Rong Chang and Ming-Yen Tsai
Electronics 2022, 11(21), 3623; https://doi.org/10.3390/electronics11213623 - 6 Nov 2022
Cited by 3 | Viewed by 1890
Abstract
Speckles and wrinkles are common skin conditions on the face, with occurrence ranging from mild to severe, affecting individuals in various ways. In this study, we aim to detect these conditions using an intelligent deep learning approach. First, we applied a face detection model and identified the face image using face positioning techniques. We then split the face into three polygonal areas (forehead, eyes, and cheeks) based on 81 position points. Skin conditions in the images were first judged by skin experts and subjectively classified into different categories, from good to bad: wrinkles were classified into five categories, and speckles into four. Next, data augmentation was performed using the following manipulations: changing the HSV hue, rotating the image, and horizontally flipping the original image, in order to facilitate deep learning with the ResNet models. We trained and tested these models, each with a different number of layers: ResNet-18, ResNet-34, ResNet-50, ResNet-101, and ResNet-152. Finally, a K-fold (K = 10) cross-validation process was applied to obtain more rigorous results. Results of the classification are, in general, satisfactory. Comparing across models and across skin features, we found that ResNet performance is generally better, in terms of average classification accuracy, when its architecture has more layers. Full article
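The three augmentation manipulations listed (hue change, rotation, horizontal flip) can be sketched as array operations; the concrete channel shift and the 90° rotation below are assumptions standing in for the paper's actual HSV shift and rotation angle:

```python
import numpy as np

def augment(image):
    """Return three augmented variants of an H x W x C image:
    a crude channel shift (stand-in for an HSV hue shift),
    a 90-degree rotation, and a horizontal flip."""
    hue_shifted = np.clip(image + 10, 0, 255)
    rotated = np.rot90(image, k=1, axes=(0, 1))
    flipped = image[:, ::-1]                    # horizontal flip
    return hue_shifted, rotated, flipped

img = np.arange(2 * 3 * 3, dtype=np.uint8).reshape(2, 3, 3)  # tiny test image
hue, rot, flip = augment(img)
print(img.shape, rot.shape, flip.shape)
```

Each variant keeps the expert-assigned label of the original image, multiplying the effective training set size by four.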
(This article belongs to the Special Issue Advances of Future IoE Wireless Network Technology)
13 pages, 3656 KiB  
Article
Automatic Pavement Crack Detection Fusing Attention Mechanism
by Junhua Ren, Guowu Zhao, Yadong Ma, De Zhao, Tao Liu and Jun Yan
Electronics 2022, 11(21), 3622; https://doi.org/10.3390/electronics11213622 - 6 Nov 2022
Cited by 14 | Viewed by 1954
Abstract
Pavement cracks can degrade pavement performance. Without timely inspection and repair, as cracks develop, the safety and service life of the pavement decrease. To curb the development of pavement cracks, detecting them accurately plays an important role. In this paper, an automatic pavement crack detection method is proposed. To achieve real-time inspection, YOLOv5 was selected as the base model. Due to the small size of pavement cracks, most deep learning-based crack detection methods struggle to reach high accuracy. To further improve the accuracy of such methods, attention modules were employed. Based on self-built datasets collected in Linyi city, the performance of various crack detection models was evaluated. The results showed that adding attention modules can effectively enhance crack detection. The precision of YOLOv5-CoordAtt reaches 95.27%, higher than that of other conventional and deep learning methods. The result images show that the proposed method detects cracks accurately under various conditions. Full article
(This article belongs to the Special Issue Deep Learning Based Object Detection II)
15 pages, 2977 KiB  
Review
Use of Machine Learning in Air Pollution Research: A Bibliographic Perspective
by Shikha Jain, Navneet Kaur, Sahil Verma, Kavita, A. S. M. Sanwar Hosen and Satbir S Sehgal
Electronics 2022, 11(21), 3621; https://doi.org/10.3390/electronics11213621 - 6 Nov 2022
Cited by 4 | Viewed by 3020
Abstract
This research is an attempt to examine the recent status and development of scientific studies on the use of machine learning algorithms to model air pollution challenges. This study uses the Web of Science database as a primary search engine and covers over 900 highly peer-reviewed articles in the period 1990–2022. Papers published on these topics were evaluated using the VOSViewer and biblioshiny software to identify and visualize significant authors, key trends, nations, research publications, and journals working on these issues. The findings show that research grew exponentially after 2012. Based on the survey, “particulate matter” is the highly occurring keyword, followed by “prediction”. Papers published by Chinese researchers have garnered the most citations (2421), followed by papers published in the United States of America (2256), and England (722). This study assists scholars, professionals, and global policymakers in understanding the current status of the research contribution on “air pollution and machine learning” as well as identifying the relevant areas for future research. Full article
(This article belongs to the Special Issue Feature Papers in Computer Science & Engineering)
16 pages, 9076 KiB  
Article
A C/X/Ku/K-Band Precision Compact 6-Bit Digital Attenuator with Logic Control Circuits
by Jialong Zeng, Jiaxuan Li, Yang Yuan, Cheng Tan and Zhongjun Yu
Electronics 2022, 11(21), 3620; https://doi.org/10.3390/electronics11213620 - 6 Nov 2022
Cited by 2 | Viewed by 1699
Abstract
This paper proposes a C/X/Ku/K-band 6-bit digital step attenuator (DSA) which employs a variety of improved attenuation cells to achieve a wide bandwidth, stable amplitude variation, stable phase variation, and a small area. In this paper, the improved T-type, π-type, and switched-path-type topologies are analyzed theoretically and applied to different attenuation values to achieve optimal attenuator performance. In addition, to reduce the complexity and improve the stability of the overall radar system, the logic control circuit is integrated into the DSA chip. Finally, the proposed attenuator is implemented in 0.15 μm GaAs technology and has a maximum attenuation range of 31.5 dB with 0.5 dB steps. The proposed DSA exhibits a root-mean-square (RMS) attenuation error of less than 0.15 dB and an RMS phase error of less than 3° at 4–24 GHz. The insertion loss (IL) and the area of the DSA are 4.3–4.5 dB and 1.5 mm × 0.4 mm, respectively. Benefiting from the improvements to the attenuation cells and from GaAs technology's strong radiation resistance and power-handling capability, the proposed DSA is suitable for spaceborne radar systems. Full article
(This article belongs to the Special Issue Advanced RF, Microwave, and Millimeter-Wave Circuits and Systems)
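The quoted 31.5 dB range with 0.5 dB steps follows directly from the 6-bit control word: 2^6 = 64 codes, so the largest code is 63 and the maximum attenuation is 63 × 0.5 dB. A quick arithmetic check:

```python
# 6-bit step attenuator arithmetic: with a 0.5 dB least-significant bit,
# the 64 control codes span 0 dB up to (2**6 - 1) * 0.5 dB = 31.5 dB.
N_BITS = 6
STEP_DB = 0.5

states = [code * STEP_DB for code in range(2 ** N_BITS)]
print(len(states), max(states))
```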
19 pages, 6994 KiB  
Article
Comparative Performance Analysis of Vibration Prediction Using RNN Techniques
by Ju-Hyung Lee and Jun-Ki Hong
Electronics 2022, 11(21), 3619; https://doi.org/10.3390/electronics11213619 - 6 Nov 2022
Cited by 7 | Viewed by 1761
Abstract
Drones are increasingly used in several industries, including rescue, firefighting, and agriculture. If the motor connected to a drone's propeller is damaged, there is a risk of a drone crash. Therefore, to prevent such incidents, an accurate and quick tool for predicting motor vibrations in drones is required. In this study, normal and abnormal vibration data were collected from the motor connected to the propeller of a drone. The period and amplitude of the vibrations are consistent in normal vibrations, whereas they are irregular in abnormal vibrations. The collected vibration data were used to train six recurrent neural network (RNN) techniques: long short-term memory (LSTM), attention-LSTM (Attn.-LSTM), bidirectional LSTM (Bi-LSTM), gated recurrent unit (GRU), attention-GRU (Attn.-GRU), and bidirectional GRU (Bi-GRU). Then, the simulation runtime each RNN technique took to predict the vibrations and the accuracy of the predicted vibrations were analyzed to compare the performance of the RNN models. Based on the simulation results, the Attn.-LSTM and Attn.-GRU techniques, which incorporate the attention mechanism, were the most efficient compared to the conventional LSTM and GRU techniques, respectively. The attention mechanism calculates the similarity between the input values and the to-be-predicted value in advance and reflects that similarity in the prediction. Full article
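The attention mechanism described in the last sentence, scoring past steps against the current one and weighting them by similarity, can be sketched as scaled dot-product attention; the feature vectors below are toy assumptions, not drone vibration data:

```python
import numpy as np

def attention(query, keys, values):
    """Scaled dot-product attention: score each past step (keys) against
    the current step (query), softmax the scores, and return the
    similarity-weighted sum of values."""
    scores = keys @ query / np.sqrt(query.size)
    weights = np.exp(scores - scores.max())   # numerically stable softmax
    weights /= weights.sum()
    return weights @ values, weights

# Three past feature vectors and their associated values (assumed toy data)
keys = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
values = np.array([0.2, 0.8, 0.5])
context, w = attention(np.array([1.0, 1.0]), keys, values)
print(round(float(context), 3))
```

The key most similar to the query receives the largest weight, so the context vector leans toward the value of the most relevant past step, which is how Attn.-LSTM/Attn.-GRU bias their predictions.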
13 pages, 3905 KiB  
Article
Detection and Classification of Tomato Crop Disease Using Convolutional Neural Network
by Gnanavel Sakkarvarthi, Godfrey Winster Sathianesan, Vetri Selvan Murugan, Avulapalli Jayaram Reddy, Prabhu Jayagopal and Mahmoud Elsisi
Electronics 2022, 11(21), 3618; https://doi.org/10.3390/electronics11213618 - 6 Nov 2022
Cited by 36 | Viewed by 6812
Abstract
Deep learning is a cutting-edge image processing method that is still relatively new but produces reliable results. Leaf disease detection and categorization employ a variety of deep learning approaches. Tomatoes are one of the most popular vegetables and can be found in every kitchen in various forms, no matter the cuisine. After potato and sweet potato, it is the third most widely produced crop. The second-largest tomato grower in the world is India. However, many diseases affect the quality and quantity of tomato crops. This article discusses a deep-learning-based strategy for crop disease detection. A Convolutional-Neural-Network-based technique is used for disease detection and classification. Inside the model, two convolutional and two pooling layers are used. The results of the experiments show that the proposed model outperformed pre-trained InceptionV3, ResNet 152, and VGG19. The CNN model achieved 98% training accuracy and 88.17% testing accuracy. Full article
(This article belongs to the Special Issue Reliable Industry 4.0 Based on Machine Learning and IoT)
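A single convolution-plus-pooling stage, the building block of the two-conv, two-pool model described above, can be sketched in plain NumPy; the kernel and the tiny input array are toy assumptions standing in for learned filters and leaf images:

```python
import numpy as np

def conv2d(x, kernel):
    """Valid-mode 2-D convolution (no padding, stride 1)."""
    kh, kw = kernel.shape
    h, w = x.shape[0] - kh + 1, x.shape[1] - kw + 1
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * kernel)
    return out

def maxpool2x2(x):
    """2x2 max pooling with stride 2."""
    h, w = x.shape[0] // 2, x.shape[1] // 2
    return x[:2 * h, :2 * w].reshape(h, 2, w, 2).max(axis=(1, 3))

leaf = np.arange(36, dtype=float).reshape(6, 6)    # stand-in for a leaf image
edge_kernel = np.array([[1.0, -1.0], [1.0, -1.0]])
feat = maxpool2x2(conv2d(leaf, edge_kernel))       # one conv + pool stage
print(feat.shape)
```

Stacking two such stages and a classifier head gives the shallow architecture the abstract contrasts with the much deeper pre-trained networks.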
16 pages, 3480 KiB  
Article
SEMRAchain: A Secure Electronic Medical Record Based on Blockchain Technology
by Halima Mhamdi, Manel Ayadi, Amel Ksibi, Amal Al-Rasheed, Ben Othman Soufiene and Sakli Hedi
Electronics 2022, 11(21), 3617; https://doi.org/10.3390/electronics11213617 - 6 Nov 2022
Cited by 13 | Viewed by 2753
Abstract
A medical record is an important part of a patient's follow-up. It comprises healthcare professionals' views, prescriptions, analyses, and all information about the patient. Several players, including the patient, the doctor, and the pharmacist, are involved in the process of sharing and managing this file. Any authorized individual can access the electronic medical record (EMR) from anywhere, and the data are shared among various health service providers. Sharing the EMR requires various conditions, such as security and confidentiality. However, existing medical systems may be exposed to system failure and malicious intrusions, making it difficult to deliver dependable services. Additionally, the features of these systems represent a challenge for centralized access control methods. This paper presents SEMRAchain, a system based on access control (Role-Based Access Control (RBAC) and Attribute-Based Access Control (ABAC)) and a smart contract approach. This fusion enables decentralized, fine-grained, and dynamic access control management for EMRs. Together, blockchain technology, as a secure distributed ledger, and access control provide a solution that gives system stakeholders not just visibility but also trustworthiness, credibility, and immutability. Full article
(This article belongs to the Special Issue Security and Privacy in Blockchain/IoT)
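The RBAC-plus-ABAC decision that such a smart contract would encode can be sketched as a plain access check: the role must permit the action, and an attribute condition must also hold. The roles, actions, and attribute below are hypothetical, not taken from SEMRAchain:

```python
# Hypothetical access rules: RBAC maps roles to permitted actions,
# and an ABAC-style attribute condition further restricts access.
ROLE_ACTIONS = {
    "doctor": {"read", "write"},
    "pharmacist": {"read"},
    "patient": {"read"},
}

def can_access(role, action, attributes):
    """Grant only if the role permits the action AND the attribute
    condition holds (here: requester is on the record's care team)."""
    return (action in ROLE_ACTIONS.get(role, set())
            and attributes.get("on_care_team", False))

print(can_access("doctor", "write", {"on_care_team": True}),
      can_access("pharmacist", "write", {"on_care_team": True}))
```

On-chain, the rule table and the check would live in contract storage and code, making every grant or denial auditable and immutable.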
16 pages, 5003 KiB  
Article
A Method for Predicting the Remaining Life of Rolling Bearings Based on Multi-Scale Feature Extraction and Attention Mechanism
by Changhong Jiang, Xinyu Liu, Yizheng Liu, Mujun Xie, Chao Liang and Qiming Wang
Electronics 2022, 11(21), 3616; https://doi.org/10.3390/electronics11213616 - 5 Nov 2022
Cited by 5 | Viewed by 1517
Abstract
In response to the problems of difficult identification of degradation stage start points and inadequate extraction of degradation features in the current rolling bearing remaining life prediction method, a rolling bearing remaining life prediction method based on multi-scale feature extraction and attention mechanism is proposed. Firstly, this paper takes the normalized bearing vibration signal as input and adopts a quadratic function as the RUL prediction label, avoiding identifying the degradation stage start point. Secondly, the spatial and temporal features of the bearing vibration signal are extracted using the dilated convolutional neural network and LSTM network, respectively, and the channel attention mechanism is used to assign weights to each degradation feature to effectively use multi-scale information. Finally, the mapping of bearing degradation features to remaining life labels is achieved through a fully connected layer for the RUL prediction of bearings. The proposed method is validated using the PHM 2012 Challenge bearing dataset, and the experimental results show that the predictive performance of the proposed method is superior to that of other RUL prediction methods. Full article
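The quadratic RUL label mentioned above can be sketched as follows; the exact quadratic form the authors use is not given here, so label = 1 - (t/T)^2, decaying from 1 at installation to 0 at failure, is an assumed form for illustration:

```python
import numpy as np

def quadratic_rul_labels(n_samples):
    """Quadratic degradation labels over a bearing's whole life:
    1 at the first sample, 0 at failure, decaying as 1 - (t/T)^2,
    so no degradation-stage start point needs to be identified."""
    t = np.arange(n_samples) / (n_samples - 1)
    return 1.0 - t ** 2

labels = quadratic_rul_labels(5)
print(np.round(labels, 3))
```

Because the label is defined over the entire signal, early samples get a nearly flat target and late samples a steep one, mirroring how degradation accelerates near failure.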
22 pages, 993 KiB  
Review
Review of Obstacle Detection Systems for Collision Avoidance of Autonomous Underwater Vehicles Tested in a Real Environment
by Rafał Kot
Electronics 2022, 11(21), 3615; https://doi.org/10.3390/electronics11213615 - 5 Nov 2022
Cited by 11 | Viewed by 3939
Abstract
High efficiency of the obstacle detection system (ODS) is essential for the high performance of autonomous underwater vehicles (AUVs) carrying out missions in a complex underwater environment. Based on an analysis of the previous literature, which includes path planning and collision avoidance algorithms, the solutions whose operation was confirmed by tests in a real-world environment were selected for consideration in this paper. These studies were subjected to a deeper analysis assessing the effectiveness of the obstacle detection algorithms. The analysis shows that, over the years, ODSs have been improved to provide greater detection accuracy, which results in better AUV response times. Almost all of the analysed methods are based on the conventional approach to obstacle detection. In the future, even better ODS parameters could be achieved by using artificial intelligence (AI) methods. Full article
11 pages, 3776 KiB  
Article
Artificial Neural Network-Based Abnormal Gait Pattern Classification Using Smart Shoes with a Gyro Sensor
by Kimin Jeong and Kyung-Chang Lee
Electronics 2022, 11(21), 3614; https://doi.org/10.3390/electronics11213614 - 5 Nov 2022
Cited by 2 | Viewed by 2776
Abstract
Recently, as a wearable-sensor-based approach, smart insole devices have been used to analyze gait patterns. By adding a small low-power sensor and an IoT device to the smart insole, it is possible to monitor human activity, gait pattern, and plantar pressure in real time and to evaluate exercise function in an uncontrolled environment. The sensor-embedded smart sole avoids any sense of a foreign object, and Wi-Fi technology allows data acquisition even when the user is not in a laboratory environment. In this study, we designed a sensor data-collection module that uses a miniaturized low-power accelerometer and gyro sensor and embedded it in a shoe to collect gait data. The gait data are sent to the gait-pattern classification module via a Wi-Fi network, and an ANN model classifies the gait into patterns such as in-toeing gait, normal gait, or out-toeing gait. Finally, the feasibility of our model was confirmed through several experiments. Full article
14 pages, 1380 KiB  
Article
Assessing Users’ Behavior on the Adoption of Digital Technologies in Management and Accounting Information Systems
by Anca Antoaneta Vărzaru, Claudiu George Bocean, Mădălina Giorgiana Mangra and Dalia Simion
Electronics 2022, 11(21), 3613; https://doi.org/10.3390/electronics11213613 - 5 Nov 2022
Cited by 1 | Viewed by 2793
Abstract
The exponential growth of digital technologies, compounded by the mobility restrictions imposed during the COVID-19 pandemic, caused a paradigm shift in traditional economic models. Digital transformation has become increasingly common in all types of organizations and affects all activities. Organizations have adopted digital technologies to increase efficiency and effectiveness in management, marketing, and accounting. This paper aims to assess the impact of digital transformation on project management, marketing, and decision-making processes as perceived by users. The study begins with theoretical research on the digitalization of management and accounting information systems and conducts an empirical investigation based on a questionnaire. First, the paper assesses users' perceptions of implementing digital technologies. The answers of 442 professionals from project management, marketing, and decision making were processed using structural equation modeling. The results show that users' acceptance of digitalization is higher in decision making due to the significant contribution of artificial intelligence to repetitive decision making. Project management and marketing also benefit from digitalization, yet non-repetitive activities remain mainly the responsibility of the human factor. Full article
13 pages, 2145 KiB  
Article
On the Optimization of Machine Learning Techniques for Chaotic Time Series Prediction
by Astrid Maritza González-Zapata, Esteban Tlelo-Cuautle and Israel Cruz-Vega
Electronics 2022, 11(21), 3612; https://doi.org/10.3390/electronics11213612 - 5 Nov 2022
Cited by 6 | Viewed by 1955
Abstract
Interest in chaotic time series prediction has grown in recent years due to its multiple applications in fields such as climate and health. In this work, we summarize the contributions of multiple works that use different machine learning (ML) methods to predict chaotic time series. It is highlighted that the challenge is predicting a larger horizon with low error, and for this task, the majority of authors use datasets generated by chaotic systems such as Lorenz, Rössler, and Mackey–Glass. Among the classification and description of different machine learning methods, this work takes the Echo State Network (ESN) as a case study to show that its optimization can enhance the prediction horizon of chaotic time series. Different optimization methods applied to various machine learning methods are surveyed, showing that metaheuristics are a good option for optimizing an ESN. In this manner, an ESN in closed-loop mode is optimized herein by applying Particle Swarm Optimization. The prediction results of the optimized ESN show an increase of about twice the number of steps ahead, highlighting the usefulness of optimizing the hyperparameters of an ML method to increase the prediction horizon. Full article
(This article belongs to the Special Issue Convolutional Neural Networks and Vision Applications, Volume II)
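A minimal Echo State Network in closed-loop mode, the case-study setup described above, might look like the following sketch. The reservoir size, spectral radius, and sine teacher signal are assumptions (a real study would use a chaotic series), and the hyperparameters set here are exactly the ones a metaheuristic such as PSO would tune:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 50                                     # reservoir size (assumed hyperparameter)
w_in = rng.uniform(-0.5, 0.5, N)           # input weights
w_res = rng.uniform(-0.5, 0.5, (N, N))
w_res *= 0.9 / np.max(np.abs(np.linalg.eigvals(w_res)))  # spectral radius 0.9

def step(x, u):
    """One reservoir update for scalar input u."""
    return np.tanh(w_in * u + w_res @ x)

# Teacher signal: a sine wave stands in for a chaotic series
series = np.sin(np.linspace(0.0, 20.0 * np.pi, 1000))

# Drive the reservoir; the state after input t is regressed onto series[t+1]
x, states = np.zeros(N), []
for u in series[:-1]:
    x = step(x, u)
    states.append(x.copy())
S = np.array(states)

# Ridge-regression readout (only the readout is trained in an ESN)
ridge = 1e-6
w_out = np.linalg.solve(S.T @ S + ridge * np.eye(N), S.T @ series[1:])

# Closed-loop mode: each prediction is fed back as the next input
u, preds = series[-1], []
for _ in range(20):
    x = step(x, u)
    u = float(x @ w_out)
    preds.append(u)
print(len(preds))
```

The prediction horizon is the number of closed-loop steps before the feedback loop drifts off the true trajectory; tuning the spectral radius, input scaling, and ridge term is what lengthens it.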