17 pages, 3049 KiB  
Article
An Event Matching Energy Disaggregation Algorithm Using Smart Meter Data
by Rehan Liaqat * and Intisar Ali Sajjad *
Department of Electrical Engineering, University of Engineering and Technology Taxila, Taxila 47050, Pakistan
Electronics 2022, 11(21), 3596; https://doi.org/10.3390/electronics11213596 - 3 Nov 2022
Cited by 10 | Viewed by 2787
Abstract
Energy disaggregation algorithms decompose aggregate demand into appliance-level demands. Among the various energy disaggregation approaches, non-intrusive load monitoring (NILM) algorithms, which require only a single sensor, have gained much attention in recent years. Various machine learning and optimization-based NILM approaches are available in the literature, but large training data requirements and high computational times are their respective drawbacks. Considering these drawbacks, we devised an event matching energy disaggregation algorithm (EMEDA) for the NILM of multistate household appliances using smart meter data. With limited training data, K-means clustering was employed to estimate appliance power states. These power states were accumulated to generate an event database (EVD) containing all combinations of appliance operations in their various states. Prior to matching, the test samples of aggregate demand events were reduced by event-driven data compression for computational efficiency. The compressed test events were matched against the sorted EVD to assess the contribution of each appliance to the aggregate demand. To counter the effects of transient spikes and/or dips during appliance state transitions, a post-processing algorithm was also developed. The proposed approach was validated using the low-rate data of the Reference Energy Disaggregation Dataset (REDD). Alongside better energy disaggregation performance, the proposed EMEDA reduced computational time by 97.5% and 61.7% compared with recent smart event-based optimization and optimization-based load disaggregation approaches, respectively.
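A minimal Python sketch of the two core EMEDA steps, K-means estimation of appliance power states and matching against a sorted event database (EVD), may help make the pipeline concrete. The function names and toy traces are illustrative assumptions, not the authors' implementation:

```python
import itertools
import numpy as np
from sklearn.cluster import KMeans

def estimate_power_states(trace, n_states):
    """Cluster one appliance's power readings into discrete power states."""
    km = KMeans(n_clusters=n_states, n_init=10, random_state=0)
    km.fit(trace.reshape(-1, 1))
    return np.sort(km.cluster_centers_.ravel())        # e.g., [0, 120] W

def build_evd(states_per_appliance):
    """Enumerate every combination of appliance states (the EVD)."""
    combos = list(itertools.product(*states_per_appliance))
    totals = np.array([sum(c) for c in combos])
    order = np.argsort(totals)                         # sorted EVD speeds matching
    return totals[order], [combos[i] for i in order]

def match_event(aggregate_power, totals, combos):
    """Match an aggregate demand event to the closest state combination."""
    idx = np.searchsorted(totals, aggregate_power)
    idx = min(max(idx, 0), len(totals) - 1)
    best = min((idx, max(idx - 1, 0)),                 # check both neighbors
               key=lambda i: abs(totals[i] - aggregate_power))
    return combos[best]

rng = np.random.default_rng(0)
fridge = estimate_power_states(np.r_[np.zeros(50), 120 + rng.normal(size=50)], 2)
heater = estimate_power_states(np.r_[np.zeros(50), 800 + rng.normal(size=50),
                                     1600 + rng.normal(size=50)], 3)
totals, combos = build_evd([fridge, heater])
print(match_event(1720.0, totals, combos))             # -> (~120 W, ~1600 W)
```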

18 pages, 4778 KiB  
Article
Research on the Determination Method of Aircraft Flight Safety Boundaries Based on Adaptive Control
by Miaosen Wang, Yuan Xue * and Kang Wang
Aeronautics Engineering College, Air Force Engineering University, Xi’an 710038, China
Electronics 2022, 11(21), 3595; https://doi.org/10.3390/electronics11213595 - 3 Nov 2022
Cited by 1 | Viewed by 2168
Abstract
Icing is one of the main external environmental factors causing loss of control (LOC) in aircraft. To ensure safe flight in icing conditions, modern large aircraft are fitted with anti-icing systems. Although anti-icing technology has become more sophisticated as research has expanded and deepened, the protection offered by existing anti-icing systems is still relatively limited, and in practice it is difficult to avoid flying with ice even when the anti-icing system is switched on. It is therefore necessary to provide safety strategies beyond the anti-icing system itself: icing safety should be considered from the aerodynamic, stability, and control points of view during the aircraft design phase, building a complete ice-tolerant protection system that combines aerodynamic design methods, flight control strategies, and implementation equipment. Based on adaptive control theory, this paper presents a new method of envelope protection in icing situations, developed through an icing case study. The method offers strong real-time performance and good robustness and has high engineering application value.
(This article belongs to the Section Systems & Control Engineering)

35 pages, 43569 KiB  
Article
Deep Learning Approach for Automatic Segmentation and Functional Assessment of LV in Cardiac MRI
by Anupama Bhan 1,*, Parthasarathi Mangipudi 2 and Ayush Goyal 3,*
1 Department of ECE, Amity School of Engineering and Technology, Amity University, Noida 201303, Uttar Pradesh, India
2 Department of CSE, Amity School of Engineering and Technology, Amity University, Noida 201303, Uttar Pradesh, India
3 Department of Electrical Engineering and Computer Science, Texas A&M University-Kingsville, Kingsville, TX 78363, USA
Electronics 2022, 11(21), 3594; https://doi.org/10.3390/electronics11213594 - 3 Nov 2022
Cited by 6 | Viewed by 2652
Abstract
The early diagnosis of cardiovascular diseases (CVDs) can effectively prevent them from worsening. The source of the disease can be effectively detected through analysis with cardiac magnetic resonance imaging (CMRI). The segmentation of the left ventricle (LV) in CMRI images plays an indispensable role in the diagnosis of CVDs. However, automated LV segmentation is a challenging task, as the LV is easily confused with neighboring regions in cardiac MRI. Deep learning models are effective at such complex segmentation owing to high-performing convolutional neural networks (CNNs). However, since CNN-based segmentation involves pixel-level classification of the image, it lacks the contextual information that is highly desirable in analyzing medical images. In this research, we propose a modified U-Net model that accurately segments the LV using context-enabled segmentation. The proposed model performs automatic segmentation and quantitative assessment of the LV, and achieves state-of-the-art accuracy by effectively tuning hyperparameters such as batch size, batch normalization, activation function, loss function, and dropout. Our method achieved Dice scores of 0.96 and 0.93 in the endocardial and epicardial walls, respectively, an average perpendicular distance of 1.73, and a percentage of good contours of 96.22. Furthermore, a high positive correlation of 0.98 was obtained between clinical parameters, such as ejection fraction, end-diastolic volume (EDV), and end-systolic volume (ESV), and the gold standard.
(This article belongs to the Special Issue Medical Image Processing Using AI)
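The Dice score reported above is a standard overlap metric; a short sketch (binary NumPy masks, 1 = LV, 0 = background) shows how it is computed:

```python
import numpy as np

def dice_score(pred, truth, eps=1e-7):
    """Dice = 2|A intersect B| / (|A| + |B|); 1.0 means perfect overlap."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    return (2.0 * inter + eps) / (pred.sum() + truth.sum() + eps)

# toy example: two overlapping square masks
a = np.zeros((64, 64)); a[10:40, 10:40] = 1
b = np.zeros((64, 64)); b[12:42, 12:42] = 1
print(round(dice_score(a, b), 3))
```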

15 pages, 693 KiB  
Review
Overview of Machine Learning and Deep Learning Approaches for Detecting Shockable Rhythms in AED in the Absence or Presence of CPR
by Kamana Dahal and Mohd. Hasan Ali *
Department of Electrical and Computer Engineering, The University of Memphis, Memphis, TN 38152, USA
Electronics 2022, 11(21), 3593; https://doi.org/10.3390/electronics11213593 - 3 Nov 2022
Cited by 6 | Viewed by 6914
Abstract
Sudden Cardiac Arrest (SCA) is one of the leading causes of death worldwide. Timely and accurate detection of such arrests and immediate defibrillation support for the victim are therefore critical. An automated external defibrillator (AED) is a medical device that diagnoses heart rhythms and delivers electric shocks to SCA patients to restore normal rhythm. Machine learning (ML) and deep learning (DL) approaches are popular in AEDs for detecting shockable rhythms and automating defibrillation. Some works in the literature review ML and DL algorithms for shockable ECG signals in AEDs, and from 2017 onward a range of DL algorithms have been proposed for the AED. This paper provides an overview of the AED, including its circuit diagram and application to SCA patients. It also presents the most up-to-date ML and DL approaches for detecting shockable rhythms in AEDs without cardiopulmonary resuscitation (CPR) or during CPR, compares their performance, and discusses other researchers' results as a foundation for in-depth follow-up work. Furthermore, the research gaps and recommendations for future research provided in this review will help researchers, scientists, and engineers conduct further work in this critical field.
(This article belongs to the Section Artificial Intelligence)
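For readers new to the area, a toy feature-based shockable-rhythm classifier of the general kind surveyed in the paper is sketched below. The features, synthetic waveforms, and random-forest choice are stand-ins for illustration only, not any reviewed author's pipeline:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def ecg_features(window):
    """A few coarse discriminators computed on one ECG window."""
    diff = np.diff(window)
    return [np.std(window),                           # amplitude spread
            np.mean(np.abs(diff)),                    # mean slope
            np.mean(np.abs(np.diff(np.sign(diff))))]  # extrema rate

rng = np.random.default_rng(0)
t = np.arange(1000) / 250.0                           # 4 s at 250 Hz
vf_like = [np.sin(2 * np.pi * 5 * t + rng.normal())
           + 0.3 * rng.normal(size=t.size) for _ in range(100)]
normal = [np.sin(2 * np.pi * 1.2 * t)
          + 0.05 * rng.normal(size=t.size) for _ in range(100)]

X = np.array([ecg_features(w) for w in vf_like + normal])
y = np.array([1] * 100 + [0] * 100)                   # 1 = shockable
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print("training accuracy:", clf.score(X, y))
```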

17 pages, 1195 KiB  
Article
Kernel Density Estimation and Convolutional Neural Networks for the Recognition of Multi-Font Numbered Musical Notation
by Qi Wang 1, Li Zhou 2,* and Xin Chen 1
1 School of Automation, China University of Geosciences, Wuhan 430074, China
2 School of Arts and Communication, China University of Geosciences, Wuhan 430074, China
Electronics 2022, 11(21), 3592; https://doi.org/10.3390/electronics11213592 - 3 Nov 2022
Cited by 4 | Viewed by 2427
Abstract
Optical music recognition (OMR) refers to converting musical scores into digitized information using electronics. In recent years, little OMR research has involved numbered musical notation (NMN). Existing NMN recognition algorithms struggle because numbered-notation fonts vary. In this paper, we built a multi-font NMN dataset. On this dataset, we apply kernel density estimation with proposed bar-line criteria to measure the relative height of symbols, achieving an accurate separation of melody lines and lyric lines in the notation. Furthermore, we develop a structurally improved convolutional neural network (CNN) to classify the symbols in melody lines. The proposed network processes melody lines hierarchically, according to the symbol arrangement rules of NMN, and contains three parallel small CNNs called Arcnet, Notenet and Linenet. Each of them adds a spatial pyramid pooling layer to adapt to the diversity of symbol sizes and styles. The experimental results show that our algorithm can accurately detect melody lines. Taking the average accuracy of identifying the various symbols as the recognition rate, the improved network reaches a recognition rate of 95.5%, which is 8.5% higher than traditional convolutional neural networks. Through audio comparison and evaluation experiments, we find that the generated audio maintains a high similarity to the original audio of the NMN.
(This article belongs to the Section Artificial Intelligence)
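The melody/lyrics separation step rests on the observation that symbol heights cluster by line type; a minimal sketch of the kernel-density idea on synthetic bounding-box data (the bar-line criteria and real layout analysis are omitted) is:

```python
import numpy as np
from scipy.stats import gaussian_kde
from scipy.signal import argrelextrema

# y-centers of detected symbol boxes: one melody band, one lyrics band
rng = np.random.default_rng(1)
y_centers = np.r_[rng.normal(100, 4, 80), rng.normal(160, 4, 60)]

kde = gaussian_kde(y_centers)
ys = np.linspace(y_centers.min(), y_centers.max(), 500)
density = kde(ys)

# valleys of the estimated density are natural cuts between line bands
cut = ys[argrelextrema(density, np.less)[0]][0]
melody = y_centers[y_centers < cut]
lyrics = y_centers[y_centers >= cut]
print(f"cut at y~{cut:.1f}: {melody.size} melody vs {lyrics.size} lyric symbols")
```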

12 pages, 3520 KiB  
Article
Genetic Algorithm for the Optimization of a Building Power Consumption Prediction Model
by Seungmin Oh 1, Junchul Yoon 2, Yoona Choi 3, Young-Ae Jung 4,* and Jinsul Kim 1,*
1 Department of ICT Convergence System Engineering, Chonnam National University, 77, Yongbong-ro, Buk-gu, Gwangju 500757, Korea
2 Korea Electric Power Corporation (KEPCO), 55, Jeollyeok-ro, Naju 58322, Korea
3 Korea Electric Power Research Institute, 105, Munji-ro, Yuseong-ku, Daejeon 34056, Korea
4 Division of Information Technology Education, Sunmoon University, Asan 31460, Korea
Electronics 2022, 11(21), 3591; https://doi.org/10.3390/electronics11213591 - 3 Nov 2022
Cited by 14 | Viewed by 3447
Abstract
Accurately predicting power consumption is essential to ensure a safe power supply. Various technologies have been studied for this task, and deep learning models have been quite successful at it. However, predicting power consumption with deep learning models requires finding an appropriate set of hyper-parameters, which introduces problems of complexity and wide search spaces. Because power consumption must be predicted accurately across diverse, distributed areas, a prediction model customized to each environment is needed, making hyper-parameter optimization essential; yet typical users of deep learning models lack the knowledge needed to find optimal parameter values. To solve this problem, we propose a method for finding optimal learning parameters in which the layer parameters of deep learning models are optimized with genetic algorithms. The proposed hyper-parameter optimization method avoids the time and cost problems of approaches that depend on existing methods or user experience. As a result, the optimized RNN model achieved 30% and 21% better mean squared error and mean absolute error, respectively, than an arbitrarily configured deep learning model, and the optimized LSTM model achieved 9% and 5% higher performance.
(This article belongs to the Special Issue Applied AI-Based Platform Technology and Application)
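A compact genetic-algorithm loop over a toy hyper-parameter space illustrates the optimization scheme; the search space and the cheap stand-in fitness (which would be a trained model's validation error in the paper's setting) are assumptions for this sketch:

```python
import random

SPACE = {"units": [16, 32, 64, 128], "lr": [1e-4, 1e-3, 1e-2], "batch": [16, 32, 64]}

def fitness(ind):
    # stand-in for the validation error of a model trained with these settings
    return (abs(ind["units"] - 64) / 64 + abs(ind["lr"] - 1e-3) * 100
            + abs(ind["batch"] - 32) / 32)

def crossover(a, b):
    return {k: random.choice([a[k], b[k]]) for k in SPACE}

def mutate(ind, rate=0.2):
    return {k: (random.choice(SPACE[k]) if random.random() < rate else v)
            for k, v in ind.items()}

random.seed(0)
pop = [{k: random.choice(v) for k, v in SPACE.items()} for _ in range(20)]
for _ in range(30):                                   # generations
    pop.sort(key=fitness)                             # lower "error" is better
    parents = pop[:10]
    pop = parents + [mutate(crossover(random.choice(parents), random.choice(parents)))
                     for _ in range(10)]
print(min(pop, key=fitness))
```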

13 pages, 7265 KiB  
Article
A 3.2 GHz Injection-Locked Ring Oscillator-Based Phase-Locked-Loop for Clock Recovery
by Dorian Vert 1,*, Michel Pignol 2, Vincent Lebre 3, Emmanuel Moutaye 3, Florence Malou 2 and Jean-Baptiste Begueret 1
1 IMS Laboratory, University of Bordeaux, 33400 Talence, France
2 Centre National d’Etudes Spatiales, 31000 Toulouse, France
3 Thales Alenia Space, 31000 Toulouse, France
Electronics 2022, 11(21), 3590; https://doi.org/10.3390/electronics11213590 - 3 Nov 2022
Cited by 3 | Viewed by 3442
Abstract
An injection-locked ring oscillator-based phase-locked loop targeting clock recovery for space applications at 3.2 GHz is presented here. Most clock recovery circuits need very low phase noise and jitter and are thus based on LC-type oscillators. This excellent performance comes at the expense of very poor integration density. To alleviate this issue, this work introduces an injection-locked ring oscillator-based PLL circuit. Combining the injection-locking process with ring oscillators yields excellent jitter performance with an extremely small surface area, since the architecture contains no inductors. The injection-locking principle is addressed, and its phase noise and jitter improvements are confirmed through measurement results: phase noise and jitter enhancements of up to 43 dB and 23.3 mUI, respectively, were measured. As intended, this work shows the best integration density among recent, comparable state-of-the-art studies. The whole architecture measures 0.1 mm² while consuming 34.6 mW in a low-cost 180 nm CMOS technology.
(This article belongs to the Special Issue Recent Advances in Silicon-Based RFIC Design)

11 pages, 483 KiB  
Article
A Review of the Gate-All-Around Nanosheet FET Process Opportunities
by Sagarika Mukesh †,‡ and Jingyun Zhang *,‡
IBM Research Albany, Albany, NY 12203, USA
† Current address: 257 Fuller Road, Suite 3100, Albany, NY 12203, USA.
‡ These authors contributed equally to this work.
Electronics 2022, 11(21), 3589; https://doi.org/10.3390/electronics11213589 - 3 Nov 2022
Cited by 50 | Viewed by 45129
Abstract
In this paper, innovations in the device design of the gate-all-around (GAA) nanosheet FET are reviewed. These innovations span the enablement of multiple threshold voltages and bottom dielectric isolation, as well as the impact of channel geometry on overall device performance. Current scaling challenges for GAA nanosheet FETs are reviewed and discussed. Finally, the future innovations required to continue scaling nanosheet FETs and future technologies are analyzed.
(This article belongs to the Special Issue Advanced CMOS Devices and Applications)

21 pages, 2507 KiB  
Article
A Hybrid Model to Predict Stock Closing Price Using Novel Features and a Fully Modified Hodrick–Prescott Filter
by Qazi Mudassar Ilyas 1,2, Khalid Iqbal 1,3,*, Sidra Ijaz 1,3, Abid Mehmood 1,4 and Surbhi Bhatia 1,2,*
1 The Saudi Investment Bank Chair for Investment Awareness Studies, The Deanship of Scientific Research, The Vice Presidency for Graduate Studies and Scientific Research, King Faisal University, Al Ahsa 31982, Saudi Arabia
2 Department of Information Systems, College of Computer Sciences and Information Technology, King Faisal University, Al Ahsa 31982, Saudi Arabia
3 Department of Computer Science, COMSATS University Islamabad, Attock Campus, Attock 43600, Pakistan
4 Department of Management Information Systems, College of Business Administration, King Faisal University, Al Ahsa 31982, Saudi Arabia
Electronics 2022, 11(21), 3588; https://doi.org/10.3390/electronics11213588 - 3 Nov 2022
Cited by 15 | Viewed by 6083
Abstract
Forecasting stock market prices is an exciting knowledge area for investors and traders. Successful predictions yield high financial revenues and protect investors from market risks. This paper proposes a novel hybrid stock prediction model that improves prediction accuracy. The proposed method consists of three main components: a noise-filtering technique, novel features, and machine learning-based prediction. We used a fully modified Hodrick–Prescott filter to smooth the historical stock price data by removing the cyclic component from the time series. We propose several new features for stock price prediction, including the return of firm, return open price, return close price, change in return open price, change in return close price, and volume per total. We investigate both traditional and deep machine learning approaches for prediction: support vector regression, auto-regressive integrated moving averages, and random forests for conventional machine learning, and long short-term memory and gated recurrent units for deep learning. We performed several experiments with these algorithms. Our best model achieved a prediction accuracy of 70.88%, a root-mean-square error of 0.04, and an error rate of 0.1.
(This article belongs to the Special Issue Artificial Intelligence Technologies and Applications)
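The smoothing-plus-features idea can be sketched with the standard Hodrick–Prescott filter standing in for the paper's fully modified variant; the price series is synthetic and the feature names follow the abstract:

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.filters.hp_filter import hpfilter

rng = np.random.default_rng(0)
close = pd.Series(100 + np.cumsum(rng.normal(0, 1, 500)), name="close")
open_ = close.shift(1).fillna(close.iloc[0])
volume = pd.Series(rng.integers(100_000, 1_000_000, 500).astype(float))

cycle, trend = hpfilter(close, lamb=1600)       # strip the cyclic component
features = pd.DataFrame({
    "return_open": open_.pct_change(),
    "return_close": trend.pct_change(),
    "change_return_open": open_.pct_change().diff(),
    "change_return_close": trend.pct_change().diff(),
    "volume_per_total": volume / volume.sum(),
}).dropna()
print(features.head())
```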

11 pages, 3472 KiB  
Article
Design of Automatic Correction System for UAV’s Smoke Trajectory Angle Based on KNN Algorithm
by Pao-Yuan Chao *, Wei-Chih Hsu and Wei-You Chen
Department of Computer and Communication Engineering, National Kaohsiung University of Science and Technology (NKUST), Kaohsiung 807618, Taiwan
Electronics 2022, 11(21), 3587; https://doi.org/10.3390/electronics11213587 - 3 Nov 2022
Cited by 1 | Viewed by 1632
Abstract
Unmanned aerial vehicles (UAVs) have evolved with the progress of science and technology in recent years. They combine technologies such as information and communications, mechanical power, remote control, and electric power storage. In the past, drones could only be flown via remote control, with mounted cameras capturing images from the air. In Taiwan, UAVs now integrate new technologies such as 5G, AI, and IoT, and they have great application value in high-altitude data acquisition, entertainment performances (such as night light shows and UAV shows with smoke), agriculture, and 3D modeling. UAVs are susceptible to natural wind when spraying smoke into the air, which offsets the smoke trail. This study developed an automatic correction system for UAV smoke trajectories: an AI model calculates smoke-tube angle corrections so that the angle can be corrected immediately as smoke is sprayed, keeping the smoke trail consistent with the flight track.
(This article belongs to the Special Issue Knowledge Engineering and Data Mining)
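A toy version of the correction model is sketched below with scikit-learn's k-nearest-neighbors regressor; the wind features, the fabricated training table, and the neighbor count are all assumptions for illustration:

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

# columns: wind speed (m/s), wind direction relative to heading (deg)
X = np.array([[0, 0], [2, 30], [2, -30], [4, 45], [4, -45], [6, 60], [6, -60]])
# target: smoke-tube angle correction (deg) that kept the trail on track
y = np.array([0.0, 5.0, -5.0, 12.0, -12.0, 20.0, -20.0])

knn = KNeighborsRegressor(n_neighbors=3, weights="distance").fit(X, y)
print(knn.predict([[3.0, 40.0]]))   # correction for a new wind condition
```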

25 pages, 9487 KiB  
Article
Switching Trackers for Effective Sensor Fusion in Advanced Driver Assistance Systems
by Ankur Deo 1,2 and Vasile Palade 2,*
1 Department of Autonomous Driving, KPIT Technologies, Pune 411057, India
2 Centre for Computer Science and Mathematical Modelling, Coventry University, Priory Road, Coventry CV1 5FB, UK
Electronics 2022, 11(21), 3586; https://doi.org/10.3390/electronics11213586 - 3 Nov 2022
Cited by 1 | Viewed by 2259
Abstract
Modern cars utilise Advanced Driver Assistance Systems (ADAS) in several ways. In ADAS, the use of multiple sensors to gauge the environment surrounding the ego-vehicle offers numerous advantages, as fusing information from more than one sensor helps to provide highly reliable and error-free data. The fused data is typically fed to a tracker algorithm, which helps to reduce noise, to compensate for situations where received sensor data is temporarily absent or spurious, and to counter occasional false positives and negatives. The performance of these constituent algorithms varies vastly under different scenarios. In this paper, we focus on the variation in the performance of tracker algorithms in sensor fusion due to changing external conditions, and on methods for countering that variation. We introduce a sensor fusion architecture in which the tracking algorithm is switched on the fly to achieve the best performance under all scenarios. By employing a Real-time Traffic Density Estimation (RTDE) technique, we determine whether the ego-vehicle is currently in dense or sparse traffic; highly dense (congested) traffic implies non-linear external circumstances, while sparse traffic implies a higher probability of linear external conditions. We also employ a Traffic Sign Recognition (TSR) algorithm that monitors for construction zones, junctions, schools, and pedestrian crossings, thereby identifying areas with a high probability of spontaneous on-road occurrences. Based on the outputs of the RTDE and TSR algorithms, we construct a logic that switches the tracker of the fusion architecture between an Extended Kalman Filter (for linear external scenarios) and an Unscented Kalman Filter (for non-linear scenarios). This ensures that the fusion model always uses the tracker best suited to its current needs, yielding consistent accuracy across multiple external scenarios compared with fusion models that employ a fixed single tracker.
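The switching logic itself is simple to express; the sketch below uses stand-in tracker classes and an assumed density threshold, since the paper's actual thresholds and filter implementations are not reproduced here:

```python
from dataclasses import dataclass

@dataclass
class SceneContext:
    traffic_density: float      # e.g., vehicles per 100 m, from the RTDE module
    high_risk_zone: bool        # TSR saw a school/junction/crossing/work sign

class EKFTracker:               # stand-in for the linear-scenario filter
    name = "EKF"

class UKFTracker:               # stand-in for the non-linear-scenario filter
    name = "UKF"

DENSITY_THRESHOLD = 8.0         # illustrative tuning constant

def select_tracker(ctx: SceneContext):
    # dense traffic or a flagged zone implies non-linear target motion
    if ctx.traffic_density > DENSITY_THRESHOLD or ctx.high_risk_zone:
        return UKFTracker()
    return EKFTracker()

print(select_tracker(SceneContext(3.0, False)).name)   # sparse highway -> EKF
print(select_tracker(SceneContext(12.0, False)).name)  # congestion -> UKF
print(select_tracker(SceneContext(2.0, True)).name)    # school zone -> UKF
```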

24 pages, 6758 KiB  
Article
A Dimension Estimation Method for Rigid and Flexible Planar Antennas Based on Characteristic Mode Analysis
by Bashar Bahaa Qas Elias 1,2,*, Azremi Abdullah Al-Hadi 1, Prayoot Akkaraekthalin 3 and Ping Jack Soh 4
1 Advanced Communication Engineering (ACE) CoE, Faculty of Electronic Engineering Technology, Pauh Putra Campus, Universiti Malaysia Perlis (UniMAP), Arau 02600, Perlis, Malaysia
2 Department of Communications Technology Engineering, College of Information Technology, Imam Ja’afar Al-Sadiq University, Baghdad 10052, Iraq
3 Department of Electrical and Computer Engineering, Faculty of Engineering, King Mongkut’s University of Technology North Bangkok (KMUTNB), 1518 Pracharat 1 Rd., Wongsawang, Bangsue, Bangkok 10800, Thailand
4 Centre for Wireless Communications (CWC), University of Oulu, P.O. Box 4500, 90014 Oulu, Finland
Electronics 2022, 11(21), 3585; https://doi.org/10.3390/electronics11213585 - 2 Nov 2022
Cited by 1 | Viewed by 1857
Abstract
An empirical method for the simplified dimension estimation of patch antennas is proposed in this work based on characteristic mode analysis (CMA). The method generates formulae that calculate substrate-independent patch widths from the antenna's characteristic angle. This defines a relationship between the characteristic angle and the natural resonant frequency of an antenna structure, bridging the shifts in resonant frequency caused by variations in substrate properties. The resulting 'calibrated' data can then be used to generate specific formulae for each antenna that determine the patch width at different operating frequencies, making the method time- and resource-efficient. The method was validated using conventional and slotted antennas designed on different substrates, both rigid (RO4003C, Rogers RT/Duroid 5880) and flexible (felt, denim fabric). Measurement results were in satisfactory agreement with simulated results, even without considering the substrates and excitations. Finally, the method was also applied to the design of dual-band antennas using flexible materials for wearable applications, showing good agreement with experimental results.
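For context, the conventional transmission-line starting formula that such methods refine is W = (c / 2fr) * sqrt(2 / (er + 1)); the sketch below evaluates it for a rigid and a flexible substrate (the permittivity values are typical datasheet figures, not the paper's calibrated results):

```python
from math import sqrt

C = 3e8  # speed of light, m/s

def patch_width(f_r_hz, eps_r):
    """Classic transmission-line-model estimate of patch width."""
    return C / (2 * f_r_hz) * sqrt(2 / (eps_r + 1))

# 2.45 GHz patch: RO4003C (eps_r ~ 3.38) vs felt (eps_r ~ 1.3)
print(f"RO4003C: {patch_width(2.45e9, 3.38) * 1000:.1f} mm")
print(f"felt:    {patch_width(2.45e9, 1.30) * 1000:.1f} mm")
```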

9 pages, 2084 KiB  
Article
Efficient System Identification of a Two-Wheeled Robot (TWR) Using Feed-Forward Neural Networks
by Muhammad Aseer Khan 1,*, Dur-e-Zehra Baig 2, Husan Ali 1, Bilal Ashraf 1, Shahbaz Khan 1, Abdul Wadood 1,* and Tariq Kamal 3,*
1 Department of Electrical Engineering, Air University, Aerospace & Aviation Campus, Kamra 43570, Pakistan
2 Faculty of Electrical Engineering, Ghulam Ishaq Khan Institute of Engineering Sciences and Technology, Topi 23640, Pakistan
3 School of Technology and Innovations, Electrical Engineering, University of Vaasa, 65200 Vaasa, Finland
Electronics 2022, 11(21), 3584; https://doi.org/10.3390/electronics11213584 - 2 Nov 2022
Cited by 2 | Viewed by 2498
Abstract
The system identification of a Two-Wheeled Robot (TWR) with nonlinear dynamics is carried out in this paper using a data-driven approach. An Artificial Neural Network (ANN) is used as a kinematic estimator that predicts the TWR's movement along the x- and y-directions and its rotation angle Ψ about the z-axis, given input vectors of linear velocity 'V' (generated through the angular velocity 'ω' of a DC motor). The DC motor drives the TWR's wheels, which have a radius of 'r'. Training datasets were generated by simulating the nonlinear kinematics of the TWR in a MATLAB Simulink environment while varying linear scale sets of 'V' and '(r ± ∆r)'. A perturbation of the TWR's wheel radius of ∆r = 10% was introduced to account for the robustness of the TWR wheel kinematics. The trained ANN accurately modeled the kinematics of the TWR. The performance indicators were the regression coefficient and the mean squared error, whose achieved values met the targets of 1 and 0.01, respectively.
(This article belongs to the Special Issue Neural Networks in Robot-Related Applications)
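A minimal sketch of the identification idea, simulating simple unicycle kinematics (a common TWR abstraction) to create training data and fitting a small network, is shown below; the network size, sample ranges, and integration settings are illustrative, not the paper's:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def simulate(v, omega, r_scale, dt=0.1, steps=50):
    """Integrate unicycle kinematics; r_scale perturbs the wheel radius."""
    x = y = psi = 0.0
    for _ in range(steps):
        x += r_scale * v * np.cos(psi) * dt
        y += r_scale * v * np.sin(psi) * dt
        psi += omega * dt
    return x, y, psi

rng = np.random.default_rng(0)
V = rng.uniform(0.1, 1.0, 2000)                 # linear velocity samples
W = rng.uniform(-1.0, 1.0, 2000)                # angular velocity samples
R = rng.uniform(0.9, 1.1, 2000)                 # +/-10% wheel-radius perturbation
X_in = np.c_[V, W, R]
Y_out = np.array([simulate(v, w, r) for v, w, r in X_in])  # (x, y, psi)

ann = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
ann.fit(X_in, Y_out)
print("R^2 on training data:", round(ann.score(X_in, Y_out), 3))
```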

20 pages, 2084 KiB  
Article
Online Adaptive Dynamic Programming-Based Solution of Networked Multiple-Pursuer and Single-Evader Game
by Zifeng Gong, Bing He *, Chen Hu, Xiaobo Zhang and Weijie Kang
Department of Nuclear Engineering, PLA Rocket Force University of Engineering, Xi’an 710025, China
Electronics 2022, 11(21), 3583; https://doi.org/10.3390/electronics11213583 - 2 Nov 2022
Cited by 6 | Viewed by 2064
Abstract
This paper presents a new scheme for the online solution of a networked multi-agent pursuit–evasion game based on an online adaptive dynamic programming method. As the multiple agents in the game can form an Internet of Things (IoT) system, the relative distance and the control energy are incorporated into the performance index, and the expressions of the agents' policies at the Nash equilibrium are obtained and proved via the minimax principle. By constructing a Lyapunov function, the capture conditions of the game are obtained and discussed. To enable each agent to obtain its Nash-equilibrium policy in real time, the online adaptive dynamic programming method is used to solve the game. Furthermore, the parameters of the neural network are fitted by value function approximation, which avoids the difficulty of solving the Hamilton–Jacobi–Isaacs equation directly, and a numerical solution of the Nash equilibrium is obtained. Simulation results demonstrate the feasibility of the proposed method for multi-agent pursuit–evasion games.
(This article belongs to the Special Issue IoT Applications for Renewable Energy Management and Control)
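A deliberately simplified sketch of the value-function-approximation step follows: a quadratic basis is fitted so that V(x) = w·phi(x) matches sampled costs built from relative distance plus control energy, echoing the paper's performance index. The batch least-squares fit stands in for the online critic update:

```python
import numpy as np

def phi(x):
    """Quadratic basis over the pursuer-evader relative state x = [dx, dy]."""
    dx, dy = x
    return np.array([dx * dx, dx * dy, dy * dy])

rng = np.random.default_rng(0)
states = rng.uniform(-5, 5, size=(200, 2))       # sampled relative positions
controls = rng.uniform(-1, 1, size=(200, 2))     # sampled control inputs
# stage cost: squared relative distance + control energy (R = I)
costs = np.sum(states**2, axis=1) + np.sum(controls**2, axis=1)

Phi = np.array([phi(s) for s in states])
w, *_ = np.linalg.lstsq(Phi, costs, rcond=None)  # fit the critic weights
print("critic weights:", np.round(w, 3))
```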

13 pages, 3715 KiB  
Article
Degradation Prediction of GaN HEMTs under Hot-Electron Stress Based on ML-TCAD Approach
by Ke Wang 1, Haodong Jiang 1, Yiming Liao 2,*, Yue Xu 3, Feng Yan 1 and Xiaoli Ji 1,*
1 School of Electronic Science and Engineering, Nanjing University, Nanjing 210046, China
2 School of Electronic and Optical Engineering, Nanjing University of Science and Technology, Nanjing 210094, China
3 School of Electronic Science and Engineering, Nanjing University of Posts and Telecommunications, Nanjing 210003, China
Electronics 2022, 11(21), 3582; https://doi.org/10.3390/electronics11213582 - 2 Nov 2022
Cited by 7 | Viewed by 3532
Abstract
In this paper, a novel approach combining technology computer-aided design (TCAD) simulation and machine learning (ML) techniques is demonstrated to assist the analysis of the performance degradation of GaN HEMTs under hot-electron stress. TCAD is used to simulate the statistical effect of hot-electron-induced, electrically active defects on device performance, while an artificial neural network (ANN) algorithm is tested for reproducing the simulation results. The results show that the ML-TCAD approach can not only rapidly obtain the performance degradation of GaN HEMTs but also accurately predict progressive failure under operating conditions with a mean squared error (MSE) of 0.2, demonstrating the potential of the ML-TCAD approach for quantitative failure data analysis and rapid defect extraction.
(This article belongs to the Section Semiconductor Devices)
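The ML half of the flow reduces to regression from stress condition to degradation; the sketch below trains a small ANN on synthetic TCAD-style samples, with the power-law target standing in for real simulation output:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
stress_time = rng.uniform(1, 1000, 3000)            # s
defect_density = rng.uniform(0.1, 1.0, 3000)        # normalized
# stand-in degradation law: delta-Vth grows as a power law in time and density
dvth = 0.05 * defect_density * stress_time**0.3 + rng.normal(0, 0.01, 3000)

X = np.c_[np.log10(stress_time), defect_density]    # log time aids convergence
ann = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
ann.fit(X[:2500], dvth[:2500])
pred = ann.predict(X[2500:])
print("held-out MSE:", round(mean_squared_error(dvth[2500:], pred), 5))
```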