Electronics, Volume 10, Issue 17 (September-1 2021) – 139 articles

Cover Story: This work concerns the design and implementation of a new measurement system of this kind currently being deployed throughout the European Organization for Nuclear Research (CERN) accelerator complex. We first discuss the measurement principle, the general system architecture, and the technology employed, focusing in particular on the most critical and specialized components developed, that is, the field-marker trigger generator and the magnetic flux integrator. We then present the results of a detailed metrological characterization of the integrator, including drift estimation and correction, as well as absolute gain calibration and frequency response. We finally discuss the latency of the whole acquisition chain and present an outline of future work to improve the capabilities of the system.
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the tables of contents of newly released issues.
  • Papers are published in both HTML and PDF forms, with PDF as the official format. To view a paper in PDF format, click its "PDF Full-text" link and open it with the free Adobe Reader.
Article
Implementation of an Award-Winning Invasive Fish Recognition and Separation System
Electronics 2021, 10(17), 2182; https://doi.org/10.3390/electronics10172182 - 06 Sep 2021
Viewed by 759
Abstract
The state of Michigan, U.S.A., was awarded USD 1 million in March 2018 for the Great Lakes Invasive Carp Challenge. The challenge sought new and novel technologies to function independently of, or in conjunction with, the fish deterrents already in place to prevent the movement of invasive carp species into the Great Lakes from the Illinois River through the Chicago Area Waterway System (CAWS). Our team proposed an environmentally friendly, low-cost, vision-based fish recognition and separation system. The proposed solution won fourth place in the challenge out of 353 participants from 27 countries. It includes an underwater imaging system that captures fish images for processing, a fish-species recognition algorithm that identifies invasive carp species, and a mechanical system that guides fish movement and restrains invasive fish for removal. We used our evolutionary learning-based algorithm to recognize fish species, which is considered the most challenging task of this solution. The algorithm was tested on a fish dataset consisting of four invasive and four non-invasive fish species. It achieved a remarkable 1.58% error rate, which is more than adequate for the proposed system, and required only a small number of images for training. This paper details the design of this unique solution and the implementation and testing accomplished since the challenge. Full article
(This article belongs to the Special Issue Convolutional Neural Networks and Vision Applications)

Article
A Novel Ultra-Low Power 8T SRAM-Based Compute-in-Memory Design for Binary Neural Networks
Electronics 2021, 10(17), 2181; https://doi.org/10.3390/electronics10172181 - 06 Sep 2021
Viewed by 818
Abstract
We propose a novel ultra-low-power, voltage-based compute-in-memory (CIM) design with a new single-ended 8T SRAM bit-cell structure. Since the proposed SRAM bit cell uses a single bitline for CIM calculation with decoupled read and write operations, it supports much higher energy efficiency. In addition, because read and write operations are separated, the stacked structure of the read unit minimizes leakage power consumption. Moreover, the proposed bit-cell structure provides better read and write stability due to the isolated read and write paths and a greater pull-up ratio. Compared with the state-of-the-art SRAM-CIM, our proposed SRAM-CIM does not require extra transistors for CIM vector-matrix multiplication. We implemented a 16 k (128 × 128) bit-cell array for the computation of 128 neurons, and used 64 binary inputs (0 or 1) and 64 × 128 binary weights (−1 or +1) for the binary neural networks (BNNs). Each row of the bit-cell array, corresponding to a single neuron, consists of a total of 128 cells: 64 cells for the dot-product and 64 replica cells for the ADC reference. The 64 replica cells, in turn, consist of 32 cells for the ADC reference and 32 cells for offset calibration. We used a row-by-row ADC for the quantized outputs of each neuron, which supports 1–7 bits of output per neuron. The ADC uses a sweeping method based on the 32 duplicate bit cells, and the sweep cycle is set to 2^(N−1) + 1, where N is the number of output bits. The simulation is performed at room temperature (27 °C) using 45 nm technology in Synopsys HSPICE, and all transistors in the bit cells use the minimum size considering area, power, and speed. The proposed SRAM-CIM reduces power consumption for vector-matrix multiplication by 99.96% compared with the existing state-of-the-art SRAM-CIM. Furthermore, because the read unit is decoupled from the internal latch node, there is no feedback from the read unit, making the design read-static-noise-margin free. Full article
(This article belongs to the Special Issue Applied AI-Based Platform Technology and Application)
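The ADC's cost model can be made concrete. A minimal sketch, assuming the abstract's garbled sweep-cycle expression reads 2^(N−1) + 1 and using `sweep_cycles` as a hypothetical helper name:

```python
def sweep_cycles(n_bits: int) -> int:
    """Cycles used by the sweeping ADC for an n_bits-wide neuron
    output, per the 2^(N-1) + 1 relation read from the abstract."""
    if not 1 <= n_bits <= 7:
        raise ValueError("the described design supports 1-7 output bits")
    return 2 ** (n_bits - 1) + 1

# cycle counts over the supported output resolutions
table = {n: sweep_cycles(n) for n in range(1, 8)}
print(table)  # {1: 2, 2: 3, 3: 5, 4: 9, 5: 17, 6: 33, 7: 65}
```

The exponential growth in cycles is why the output resolution is capped at 7 bits in the described design.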

Article
Study on Electric Field Modulation and Avalanche Enhancement of SiC/GaN IMPATT Diode
Electronics 2021, 10(17), 2180; https://doi.org/10.3390/electronics10172180 - 06 Sep 2021
Cited by 1 | Viewed by 581
Abstract
This paper proposes a 6H silicon carbide (SiC)/gallium nitride (GaN) heterogeneous p-n structure to replace the GaN homogeneous p-n junction in an impact-ionization avalanche transit-time (IMPATT) diode, and the performance of this 6H-SiC/GaN heterojunction single-drift-region (SDR) IMPATT diode is simulated at frequencies above 100 GHz. The performance parameters of the studied device were simulated and compared with those of a conventional GaN p-n IMPATT diode. The results show that the p-SiC/n-GaN IMPATT performance is significantly improved, reflected in enhanced operating frequency, RF power, and DC-RF conversion efficiency, through two mechanisms. One is that the new structure has an excessive avalanche injection of electrons in the p-type SiC region owing to the ionization characteristics of the SiC material; the other is a lower electric field distribution in the drift region, which can induce a higher electron velocity and larger current in the structure. The work provides a reference for a deeper understanding of the mechanism and design of IMPATT devices based on wide-bandgap semiconductor materials. Full article
(This article belongs to the Section Optoelectronics)

Review
Electric Power Network Interconnection: A Review on Current Status, Future Prospects and Research Direction
Electronics 2021, 10(17), 2179; https://doi.org/10.3390/electronics10172179 - 06 Sep 2021
Cited by 3 | Viewed by 1003
Abstract
An interconnection of electric power networks enables decarbonization of the electricity system by harnessing and sharing large amounts of renewable energy. The areas with the highest renewable energy potential are often far from load centers and are integrated through long-distance transmission interconnections. Transmission interconnection mitigates the variability of renewable energy sources by importing and exporting electricity between neighbouring regions. This paper presents an overview of regional and global energy consumption trends by fuel type. Large power grid interconnections, including renewable energy and its integration into the utility grid, and the large power grid interconnections existing globally are also presented. The technologies used for power grid interconnections, including HVAC, HVDC (LCC and VSC, the latter comprising MMC-VSC and HVDC Light), VFT, and the newly proposed FASAL, are discussed together with their potential projects. Future trends of grid interconnection, including clean energy initiatives and developments, UHV AC and DC transmission systems, and smart grid developments, are presented in detail. A review of regional and global initiatives implementing electric energy interconnections in the context of a sustainable future is presented, along with the associated challenges and benefits of globally interconnected power grids and intercontinental interconnectors. Finally, research directions in clean and sustainable energy, smart grids, and UHV transmission systems that facilitate the goal of a global future grid interconnection are addressed. Full article

Review
An Overview of Wearable Piezoresistive and Inertial Sensors for Respiration Rate Monitoring
Electronics 2021, 10(17), 2178; https://doi.org/10.3390/electronics10172178 - 06 Sep 2021
Cited by 6 | Viewed by 1146
Abstract
The demand for wearable devices to measure respiratory activity is constantly growing, with applications in a wide range of scenarios (e.g., clinical environments and workplaces, outdoors for monitoring sports activities, etc.). In particular, the respiration rate (RR) is a vital parameter since it can indicate serious illness (e.g., pneumonia, emphysema, pulmonary embolism, etc.). Therefore, several solutions have been presented in the scientific literature and on the market to make RR monitoring simple, accurate, reliable, and noninvasive. Among the different transduction methods, the piezoresistive and inertial ones satisfactorily meet the requirements for smart wearable devices, since they are unobtrusive, lightweight, and easy to integrate. Hence, this review paper focuses on innovative wearable devices, detection strategies, and algorithms that exploit piezoresistive or inertial sensors to monitor breathing parameters. First, this paper presents a comprehensive overview of innovative piezoresistive wearable devices for measuring a user's respiratory variables. Then, a survey of novel piezoresistive textiles for developing wearable devices that detect breathing movements is reported. Afterwards, the state of the art of wearable devices that monitor respiratory parameters with inertial sensors (i.e., accelerometers and gyroscopes) is presented, aimed at detecting dysfunctions or pathologies in a non-invasive and accurate way. In this field, several processing tools are employed to extract the respiratory parameters from inertial data; therefore, an overview of algorithms and methods to determine the respiratory rate from acceleration data is provided. Finally, comparative analyses of all the covered topics are reported, providing useful insights for developing the next generation of wearable sensors for monitoring respiratory parameters. Full article
(This article belongs to the Special Issue 10th Anniversary of Electronics: Hot Topics in Bioelectronics)
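A common baseline among the acceleration-based RR algorithms surveyed here is to take the dominant spectral peak of the detrended chest-wall acceleration within the physiological breathing band. A minimal NumPy sketch (the function name and band limits are illustrative assumptions, not taken from any particular surveyed paper):

```python
import numpy as np

def respiration_rate_fft(acc, fs):
    """Estimate respiration rate (breaths/min) as the dominant FFT
    peak of a detrended acceleration signal, restricted to an
    assumed physiological band of 0.1-0.7 Hz (6-42 breaths/min)."""
    x = np.asarray(acc, dtype=float)
    x = x - x.mean()                               # remove gravity/DC offset
    freqs = np.fft.rfftfreq(x.size, d=1.0 / fs)
    spectrum = np.abs(np.fft.rfft(x))
    band = (freqs >= 0.1) & (freqs <= 0.7)
    return 60.0 * freqs[band][np.argmax(spectrum[band])]

# synthetic chest-wall signal: 15 breaths/min (0.25 Hz) plus sensor noise
rng = np.random.default_rng(0)
fs = 50.0
t = np.arange(0, 60, 1.0 / fs)
acc = 0.02 * np.sin(2 * np.pi * 0.25 * t) + 0.002 * rng.standard_normal(t.size)
print(round(respiration_rate_fft(acc, fs), 1))  # ~15.0
```

Real pipelines add axis fusion, band-pass filtering, and motion-artifact rejection on top of this peak-picking core.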

Article
Scrolling-Aware Rendering to Reduce Frame Rates on Smartphones
Electronics 2021, 10(17), 2177; https://doi.org/10.3390/electronics10172177 - 06 Sep 2021
Viewed by 741
Abstract
One of the major sources of power drain in smartphones is the frame rendering and display process called the graphics pipeline, whose power consumption depends largely on the number of frame rendering operations per second (fps), known as the frame rate, and on the quantity of UI content to be rendered. We discovered a major source of power consumption during scrolling operations: the Android graphics pipeline renders all or a large portion of the most recently displayed content at a frame rate of nearly 60 fps. This paper proposes a scrolling-aware rendering (SCAR) scheme to reduce the frame rate caused by scrolling. When rendering a frame for UI content to be displayed, SCAR pre-renders UI content that is likely to be displayed soon in a subsequent scrolling operation. The frame is extended to place the pre-rendered UI content contiguously with the UI content to be displayed. Upon a subsequent scroll, SCAR repositions the extended frame on screen by the scrolling distance instead of rendering a new frame. Our experiments on a smartphone show that SCAR reduced frame rates to below one fps while scrolling, saving up to 30% of power. Full article
(This article belongs to the Section Computer Science & Engineering)
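The mechanism can be sketched in a few lines. A toy model (class and attribute names are invented for illustration; the real SCAR operates inside the Android graphics pipeline): render an extended frame once, then serve any scroll that stays inside it by repositioning instead of re-rendering.

```python
class ScarSketch:
    """Toy model of scrolling-aware rendering: pre-render extra UI
    content contiguous with the viewport, then serve later scrolls by
    repositioning the extended frame instead of re-rendering."""

    def __init__(self, viewport_h, prerender_h):
        self.viewport_h = viewport_h
        self.frame_h = viewport_h + prerender_h   # extended frame height
        self.renders = 0
        self._render_extended_frame(0)

    def _render_extended_frame(self, content_top):
        self.frame_top = content_top              # frame origin in content
        self.offset = 0                           # viewport offset in frame
        self.renders += 1                         # one full render pass

    def scroll_to(self, content_top):
        in_frame = (self.frame_top <= content_top and
                    content_top + self.viewport_h <= self.frame_top + self.frame_h)
        if in_frame:
            self.offset = content_top - self.frame_top   # cheap reposition
        else:
            self._render_extended_frame(content_top)     # cache miss

scar = ScarSketch(viewport_h=1000, prerender_h=2000)
for top in range(0, 2001, 100):   # 2000 px of scrolling
    scar.scroll_to(top)
print(scar.renders)   # 1: every scroll was served from the extended frame
```

Scrolling past the pre-rendered region triggers one new render pass, which is the cost SCAR trades against the near-60 fps default.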

Article
Super-Resolution Model Quantized in Multi-Precision
Electronics 2021, 10(17), 2176; https://doi.org/10.3390/electronics10172176 - 06 Sep 2021
Viewed by 606
Abstract
Deep learning has achieved outstanding results in various machine learning tasks against the background of rapidly increasing computing capacity. However, alongside higher performance, models have become larger, training and inference take longer, memory and storage occupancy increase, computing efficiency shrinks, and energy consumption grows. Consequently, it is difficult to run these models on edge devices such as micro and mobile devices. Model compression techniques, such as model quantization, are therefore emerging and being actively researched. Quantization-aware training takes into account the accuracy loss resulting from data mapping during model training: it clamps and approximates the data when updating parameters and introduces quantization errors into the model loss function. During quantization, we found that some stages of two super-resolution networks, SRGAN and ESRGAN, are sensitive to quantization, which greatly reduces performance. Therefore, we use higher-bit integer quantization for the sensitive stages and train the whole model with quantization-aware training. Although a little model-size reduction was sacrificed, accuracy approaching that of the original model was achieved. The ESRGAN model was still reduced by nearly 67.14% and the SRGAN model by nearly 68.48%, while inference time was reduced by nearly 30.48% and 39.85%, respectively. Moreover, the PI values of SRGAN and ESRGAN are 2.1049 and 2.2075, respectively. Full article
(This article belongs to the Section Artificial Intelligence)
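The clamp-and-approximate step described above can be illustrated with a scalar fake-quantization routine. A hedged sketch (uniform affine quantization; the paper's exact scheme may differ), showing why giving a sensitive stage more bits shrinks the rounding error:

```python
def fake_quantize(x, n_bits, x_min, x_max):
    """Simulate quantization as in quantization-aware training:
    clamp to [x_min, x_max], round onto an n_bits integer grid,
    and map back so the rounding error enters the forward pass."""
    x = min(max(x, x_min), x_max)          # clamp
    levels = (1 << n_bits) - 1             # number of grid steps
    scale = (x_max - x_min) / levels
    q = round((x - x_min) / scale)         # integer code
    return x_min + q * scale               # de-quantized value

coarse = fake_quantize(0.37, 4, 0.0, 1.0)   # 4-bit grid
fine = fake_quantize(0.37, 8, 0.0, 1.0)     # 8-bit grid for a sensitive stage
print(abs(coarse - 0.37) > abs(fine - 0.37))  # True: more bits, less error
```

Mixing precisions per stage, as the abstract describes, amounts to choosing `n_bits` stage by stage while the quantization error still flows into the training loss.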

Article
Proxy-Based Adaptive Transmission of MP-QUIC in Internet-of-Things Environment
Electronics 2021, 10(17), 2175; https://doi.org/10.3390/electronics10172175 - 06 Sep 2021
Viewed by 730
Abstract
With the growth of Internet of Things (IoT) services and applications, efficient transmission of IoT data has become crucial. The IETF has recently developed the QUIC protocol for UDP-based multiplexed and secure transport. Multipath QUIC (MP-QUIC) is also being discussed as an extension of QUIC to multipath network environments. In this paper, we propose proxy-based adaptive MP-QUIC transmission for throughput enhancement in the IoT environment. In the proposed scheme, a proxy device is deployed between IoT clients and the IoT server to aggregate the traffic of many clients in the access network. The proxy transports a large amount of traffic to the server, adapting to network conditions, by using multiple paths in the backbone network. For this purpose, the proxy device employs a path manager to monitor current network conditions and a connection manager to manage the MP-QUIC connections with the IoT server over the multipath backbone network. For effective MP-QUIC transmission, the proxy transmits prioritized packets to the server over the best path with the lowest round-trip time (RTT), whereas non-prioritized packets are delivered over the other paths for traffic load balancing. From testbed experiments with the MP-QUIC implementation and ns-3 simulation modules, we see that the proposed scheme can outperform normal QUIC (using a single path) and the existing MP-QUIC scheme (using a round-robin policy) in terms of response delay and total transmission delay. These performance gaps tend to increase as link delays and packet loss rates grow. Full article
(This article belongs to the Special Issue IoT Services, Applications, Platform, and Protocols)

Article
Breaking KASLR Using Memory Deduplication in Virtualized Environments
Electronics 2021, 10(17), 2174; https://doi.org/10.3390/electronics10172174 - 06 Sep 2021
Cited by 1 | Viewed by 915
Abstract
Recent operating systems (OSs) have adopted a defense mechanism called kernel page table isolation (KPTI) for protecting the kernel from all attacks that break the kernel address space layout randomization (KASLR) using various side-channel analysis techniques. In this paper, we demonstrate that KASLR can still be broken, even with the latest OSs where KPTI is applied. In particular, we present a novel memory-sharing-based side-channel attack that breaks the KASLR on KPTI-enabled Linux virtual machines. The proposed attack leverages the memory deduplication feature on a hypervisor, which provides a timing channel for inferring secret information regarding the victim. By conducting experiments on KVM and VMware ESXi, we show that the proposed attack can obtain the kernel address within a short amount of time. We also present several countermeasures that can prevent such an attack. Full article
(This article belongs to the Special Issue Security and Privacy Architecture for Cloud Computing)

Article
Modeling and Analysis of Electromagnetic Field and Temperature Field of Permanent-Magnet Synchronous Motor for Automobiles
Electronics 2021, 10(17), 2173; https://doi.org/10.3390/electronics10172173 - 06 Sep 2021
Viewed by 584
Abstract
In order to study the interaction of electromagnetic and temperature fields in a motor, the iron-loss curves of silicon steel at different frequencies and the B-H curves of the permanent magnet (PM) at different temperatures were obtained to establish the electromagnetic model of the permanent-magnet synchronous motor (PMSM). Then, unidirectional and bidirectional coupling models were established and analyzed based on multi-physical fields. Using the bidirectional coupling model, the temperature field distribution and electromagnetic characteristics of the motor were analyzed, and the interaction between the temperature and electromagnetic fields was studied. Finally, the temperature of the PMSM was tested. The results showed that the bidirectional coupling results were closer to the test results because they account for the interaction between the electromagnetic and thermal fields. Full article
(This article belongs to the Section Microwave and Wireless Communications)

Article
A Wide-Angle Scanning Sub-Terahertz Leaky-Wave Antenna Based on a Multilayer Dielectric Image Waveguide
Electronics 2021, 10(17), 2172; https://doi.org/10.3390/electronics10172172 - 06 Sep 2021
Cited by 2 | Viewed by 609
Abstract
This paper presents a new layered dielectric leaky-wave antenna (LWA) for the sub-terahertz (THz) frequency range, capable of efficient operation at broadside with a wide beam-scanning angle and stable gain. It consists of a conductor-backed alumina dielectric image line (DIL) with two different dielectric layers mounted on top of each other for performance improvement. The upper layer is a high-permittivity RO6010 substrate acting as a superstrate to enhance directivity, and the lower layer is a low-permittivity RT/duroid 5880 substrate stacked on the alumina DIL to prevent the possible excitation of higher-order modes in the DIL channel. A 15-element linear array of radiating overlapped discs, fed by this waveguide and used to mitigate the open stop-band (OSB) problem, was designed and simulated at frequencies around 170 GHz. The dominant mode of the layered dielectric waveguide is perturbed by the infinite space harmonics generated by two sets of overlapped discs periodically sandwiched between the layers. The antenna exhibits a relatively wide impedance bandwidth of 28.19% (157.5–206 GHz). Its radiation mechanism has been studied extensively through simulations. The results reveal that the antenna provides wide scanning capability through broadside, from −23° to 38°, over the frequency range between 157.5 GHz and 201.5 GHz. For an array with 15 radiating elements, the simulated peak gain in the band is 15 dBi, and the broadside gain is 13.6 dBi at 172 GHz. Full article
(This article belongs to the Special Issue Antennas for Next-Generation Communication Systems)

Review
A Survey of the Tactile Internet: Design Issues and Challenges, Applications, and Future Directions
Electronics 2021, 10(17), 2171; https://doi.org/10.3390/electronics10172171 - 06 Sep 2021
Cited by 5 | Viewed by 999
Abstract
The Tactile Internet (TI) is an emerging area of research involving 5G and beyond (B5G) communications to enable real-time interaction of haptic data over the Internet between tactile ends, with audio-visual data as feedback. This emerging TI technology is viewed as the next evolutionary step for the Internet of Things (IoT) and is expected to bring about massive changes in Healthcare 4.0, Industry 4.0, and autonomous vehicles to resolve complicated issues in modern society. This vision of the TI aims to turn a dream into reality. This article provides a comprehensive survey of the TI, focussing on design architecture, key application areas, potential enabling technologies, and the current issues and challenges in realising it. To illustrate the novelty of our work, we present a brainstorming mind-map of all the topics discussed in this article. We emphasise the design aspects of the TI and discuss its three main sections, i.e., the master, network, and slave sections, with a focus on the proposed application-centric design architecture. With the help of illustrative use-case diagrams, we discuss and tabulate the possible applications of the TI within a 5G framework and their requirements. We then extensively address the currently identified issues and challenges, along with promising potential enablers of the TI. Moreover, a comprehensive review of related articles on enabling technologies is provided, including Fifth Generation (5G), Software-Defined Networking (SDN), Network Function Virtualisation (NFV), Cloud/Edge/Fog Computing, Multiple Access, and Network Coding. Finally, we conclude the survey with several research issues that are open for further investigation. Thus, the survey provides insights into the TI that can help network researchers and engineers contribute further towards developing the next-generation Internet. Full article

Article
Integration of Extended Reality and a High-Fidelity Simulator in Team-Based Simulations for Emergency Scenarios
Electronics 2021, 10(17), 2170; https://doi.org/10.3390/electronics10172170 - 06 Sep 2021
Cited by 1 | Viewed by 1192
Abstract
Wearable devices such as smart glasses are considered promising assistive tools for information exchange in healthcare settings. We aimed to evaluate the usability and feasibility of smart glasses for team-based simulations constructed using a high-fidelity simulator. Two scenarios of patients with arrhythmia were developed to establish a procedure for interprofessional interactions via smart glasses using 15 h of simulation training. Three to four participants formed a team and played the roles of remote supporter or bedside trainee with smart glasses. Usability, attitudes towards the interprofessional healthcare team, and learning satisfaction were assessed. Using a 5-point Likert scale, from 1 (strongly disagree) to 5 (strongly agree), 31 participants reported that the smart glasses were easy to use (3.61 ± 0.95), that they felt confident during use (3.90 ± 0.87), that they responded positively to long-term use (3.26 ± 0.89), and that physical discomfort was low (1.96 ± 1.06). Learning satisfaction was high (4.65 ± 0.55), and most (84%) participants found the experience favorable. Key challenges included an unstable internet connection, poor resolution and display, and physical discomfort while using the smart glasses with accessories. We determined the feasibility and acceptability of smart glasses for interprofessional interactions within a team-based simulation. Participants responded favorably to a smart-glasses-based simulation learning environment that would be applicable in clinical settings. Full article
(This article belongs to the Special Issue LifeXR: Concepts, Technology and Design for Everyday XR)

Article
Automatic Multilingual Stopwords Identification from Very Small Corpora
Electronics 2021, 10(17), 2169; https://doi.org/10.3390/electronics10172169 - 05 Sep 2021
Viewed by 576
Abstract
Tools for Natural Language Processing rely on linguistic resources that are language-specific. The complexity of building such resources leaves many languages without them, so learning them automatically from sample texts would be a desirable solution. This usually requires huge training corpora, which are not available for many local languages and jargons that lack a wide literature. This paper focuses on stopwords, i.e., terms in a text which do not contribute to conveying its topic or content. It provides two main, inter-related and complementary, methodological contributions: (i) it proposes a novel approach based on term and document frequency to rank candidate stopwords, which works also on very small corpora (even single documents); and (ii) it proposes an automatic cutoff strategy to select the best candidates in the ranking, thus addressing one of the most critical problems in stopword identification practice. Nice features of these approaches are that (i) they are generic and applicable to different languages, (ii) they are fully automatic, and (iii) they do not require any previous linguistic knowledge. Extensive experiments show that both are extremely effective and reliable. The former outperforms all comparable approaches in the state of the art, both in performance (precision stays at 100% or nearly so for a large portion of the top-ranked candidate stopwords, while recall is quite close to the theoretical maximum) and in smooth behavior (precision is monotonically decreasing and recall monotonically increasing, allowing the experimenter to choose the preferred balance). The latter is more flexible than existing solutions in the literature, requiring just one parameter intuitively related to the desired balance between precision and recall. Full article
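The frequency-based ranking idea can be sketched quickly. A toy score that multiplies normalized term frequency by document frequency (this scoring function is an illustrative assumption, not the authors' exact measure):

```python
from collections import Counter

def rank_stopword_candidates(docs):
    """Rank terms as stopword candidates by combining term frequency
    (how often a term occurs overall) with document frequency (in how
    many documents it appears): topic-neutral terms score high on both."""
    tf, df = Counter(), Counter()
    for doc in docs:
        tokens = doc.lower().split()
        tf.update(tokens)
        df.update(set(tokens))
    total = sum(tf.values())
    score = {t: (tf[t] / total) * (df[t] / len(docs)) for t in tf}
    return sorted(score, key=score.get, reverse=True)

docs = [
    "the cat sat on the mat",
    "the dog chased the cat",
    "a bird sat on a branch",
]
print(rank_stopword_candidates(docs)[0])   # 'the'
```

The paper's second contribution, the automatic cutoff, would then pick how far down such a ranking to draw the stopword line.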

Article
Theory and Design of a Flexible Two-Stage Wideband Wilkinson Power Divider
Electronics 2021, 10(17), 2168; https://doi.org/10.3390/electronics10172168 - 05 Sep 2021
Viewed by 648
Abstract
This article presents the design scheme of a wideband Wilkinson Power Divider (WPD) with a two-stage architecture utilizing quarter-wave transmission lines and short-circuit stubs. The bandwidth of the proposed WPD is flexible and can be controlled through the design parameters. The proposed design achieves excellent isolation between output ports in addition to good in-band performance. The analysis of the proposed circuit yields a simplified transfer function, which is then equated with a standard band-pass transfer function to determine the transmission-line parameters, the stub impedances, and the value of the isolation resistors. Furthermore, it is demonstrated that a simple alteration of the proposed circuit enables the design of a wideband DC-isolated WPD that maintains good in-band and isolation performance. A number of case studies are included to highlight the flexibility of the proposed design. Two distinct prototypes were fabricated on different boards to demonstrate the wideband performance of the proposed design. Excellent agreement between the simulated and measured results for both designs over a wide band, including very good isolation between ports, validates the proposed design. Full article
(This article belongs to the Section Microwave and Wireless Communications)
Article
Calculation Methodologies of Complex Permeability for Various Magnetic Materials
Electronics 2021, 10(17), 2167; https://doi.org/10.3390/electronics10172167 - 05 Sep 2021
Cited by 1 | Viewed by 835
Abstract
In order to design power converters and wireless power systems using high-frequency magnetic materials, the magnetic characteristics of the inductors and transformers should be specified in detail with respect to the operating frequency. To investigate the complex permeability of magnetic materials using simple test prototypes, this paper suggests inductor-model-based calculation methodologies for the complex permeability that reveal the core loss characteristics. Based on the measured test voltage Ve, current Ie, and phase difference θe, which can be obtained simply with an oscilloscope and a function generator, the real and imaginary permeability can be calculated with respect to the operating frequency by the suggested methodologies. Such information on the real and imaginary permeability is important for determining the size of the magnetic components and for analyzing the core loss. To compare high-frequency magnetic materials, three prototypes with a ferrite core, an amorphous core, and a nanocrystalline core were built and verified by experiment. As a result, the ferrite core is superior to the other cores in terms of core loss, and the nanocrystalline core is recommended for compact transformer applications. The proposed calculation of the complex (i.e., real and imaginary) permeability, which is not revealed in datasheets, provides a way to easily determine parameters useful for industrial electronics engineers. Full article
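The inductor-model-based calculation can be sketched as follows: treating the prototype as a lossy inductor, the measured Ve, Ie, and θe give a complex impedance whose imaginary and real parts map to the real (energy-storing) and imaginary (loss) permeability. The geometry values and the lossless sanity check below are illustrative assumptions, not values from the paper.

```python
import cmath
import math

MU0 = 4 * math.pi * 1e-7  # vacuum permeability (H/m)

def complex_permeability(V, I, theta, freq, turns, Ae, le):
    """Estimate the complex relative permeability (mu', mu'') of a core
    from the measured voltage amplitude V, current amplitude I, and
    phase difference theta (radians) of a test inductor.
    Ae: effective core cross-section (m^2); le: effective path length (m)."""
    omega = 2 * math.pi * freq
    Z = (V / I) * cmath.exp(1j * theta)   # measured complex impedance
    # For a lossy inductor Z = omega*L'' + j*omega*L', so:
    L_store = Z.imag / omega              # energy-storing inductance part
    L_loss = Z.real / omega               # loss-related part
    k = le / (MU0 * turns**2 * Ae)        # geometry factor
    return L_store * k, L_loss * k        # (mu', mu'')

# Sanity check with a synthetic lossless core of mu_r = 2000
N, Ae, le, f = 10, 1e-4, 0.05, 100e3
L = MU0 * 2000 * N**2 * Ae / le
V = 2 * math.pi * f * L * 0.1             # |Z| * I with I = 0.1 A
mu_re, mu_im = complex_permeability(V, 0.1, math.pi / 2, f, N, Ae, le)
print(round(mu_re), round(mu_im))         # recovers ~2000 and ~0
```

A real lossy core would show θe below 90°, producing a nonzero imaginary permeability that quantifies the core loss.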
(This article belongs to the Special Issue Advanced Magnetic and Electrical Characterization Techniques)
Article
Deep Learning-Based Indoor Two-Dimensional Localization Scheme Using a Frequency-Modulated Continuous Wave Radar
Electronics 2021, 10(17), 2166; https://doi.org/10.3390/electronics10172166 - 05 Sep 2021
Cited by 4 | Viewed by 684
Abstract
In this paper, we propose a deep learning-based indoor two-dimensional (2D) localization scheme using a 24 GHz frequency-modulated continuous wave (FMCW) radar. In the proposed scheme, deep neural network and convolutional neural network (CNN) models that use different numbers of FMCW radars were employed to overcome the limitations of the conventional 2D localization scheme based on multilateration methods. The performance of the proposed scheme was evaluated experimentally and compared with the conventional scheme under the same conditions. According to the results, the 2D location of the target could be estimated by the proposed scheme with a single radar, whereas two FMCW radars were required by the conventional scheme. Furthermore, the proposed CNN scheme with two FMCW radars produced an average localization error of 0.23 m, while the error of the conventional scheme with two FMCW radars was 0.53 m. Full article
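For reference, the conventional multilateration baseline with two radars reduces to intersecting two range circles. The sketch below shows that baseline, not the paper's learning-based method; the radar placement and target position are hypothetical.

```python
import math

def multilaterate_2d(d1, d2, baseline):
    """Conventional 2D localization from two range-only radars placed
    at (0, 0) and (baseline, 0): intersect the two range circles and
    return the solution in the half-plane y >= 0 (targets in front)."""
    x = (d1**2 - d2**2 + baseline**2) / (2 * baseline)
    y_sq = d1**2 - x**2
    if y_sq < 0:
        raise ValueError("inconsistent ranges: circles do not intersect")
    return x, math.sqrt(y_sq)

# Target at (1.0, 2.0) m, radars 2 m apart
d1 = math.hypot(1.0, 2.0)
d2 = math.hypot(1.0 - 2.0, 2.0)
x, y = multilaterate_2d(d1, d2, 2.0)
print(x, y)  # recovers approximately (1.0, 2.0)
```

The need for two ranges in this baseline is exactly the limitation the proposed single-radar deep learning scheme removes.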
Article
Ultra-Wideband Reconfigurable X-Band and Ku-Band Metasurface Beam-Steerable Reflector for Satellite Communications
Electronics 2021, 10(17), 2165; https://doi.org/10.3390/electronics10172165 - 04 Sep 2021
Cited by 3 | Viewed by 868
Abstract
A continuously reconfigurable metasurface reflector based on a mushroom unit cell geometry integrated with a varactor diode is presented in this paper. The unit cell of the metasurface was designed and optimized to operate in the X-band and Ku-band, improving satellite communication’s quality of service. The loss mechanisms of continuous control over the unit cell reflection phase, and their impact on beam-steering resolution, are considered and the analysis results are presented. The unit cell design parameters were analyzed with an emphasis on losses and the dynamic reflection phase range. The unit cell reflection magnitude and phase are shown over a wide frequency bandwidth, with good agreement between all the measurements and the simulations. The metasurface enabled a high dynamic range in the unit cell resonant frequency from 7.8 to 15 GHz. In addition, the reflection phase and absorption calibration are demonstrated for multiple operating frequencies, namely, 11 GHz, 12 GHz, and 13.5 GHz. Furthermore, design trade-offs and manufacturing limitations were considered. Finally, a beam-steering simulation using the designed metasurface is shown and discussed. Full article
(This article belongs to the Special Issue State-of-the-Art in Satellite Communication Networks)
Article
Accurate Realtime Motion Estimation Using Optical Flow on an Embedded System
Electronics 2021, 10(17), 2164; https://doi.org/10.3390/electronics10172164 - 04 Sep 2021
Cited by 1 | Viewed by 624
Abstract
Motion estimation has become one of the most important techniques used in realtime computer vision applications. There are several algorithms to estimate object motion. One of the most widespread techniques consists of calculating the apparent velocity field observed between two successive images of the same scene, known as the optical flow. However, the high accuracy of dense optical flow estimation is costly in run time. In this context, we designed an accurate motion estimation system based on the calculation of the optical flow of a moving object using the Lucas–Kanade algorithm. Our approach was applied to a local processing region and implemented on a Raspberry Pi 4, with several improvements. The efficiency of our accurate realtime implementation was demonstrated by the experimental results, showing better performance than the conventional calculation. Full article
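A minimal, pure-Python sketch of the single-window Lucas–Kanade least-squares step is shown below; the paper's implementation adds region selection and embedded-platform optimizations on top of this, which are not reproduced here. The synthetic quadratic test image is an assumption for demonstration.

```python
def lucas_kanade(I0, I1):
    """Estimate one (u, v) translation between two grayscale frames
    (lists of lists) with the Lucas-Kanade least-squares step: solve
    [sum Ix^2, sum IxIy; sum IxIy, sum Iy^2] [u v]^T = -[sum IxIt, sum IyIt]."""
    h, w = len(I0), len(I0[0])
    sxx = sxy = syy = sxt = syt = 0.0
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            Ix = (I0[y][x + 1] - I0[y][x - 1]) / 2.0  # spatial gradients
            Iy = (I0[y + 1][x] - I0[y - 1][x]) / 2.0
            It = I1[y][x] - I0[y][x]                  # temporal gradient
            sxx += Ix * Ix; sxy += Ix * Iy; syy += Iy * Iy
            sxt += Ix * It; syt += Iy * It
    det = sxx * syy - sxy * sxy
    if abs(det) < 1e-12:
        raise ValueError("aperture problem: gradient matrix is singular")
    u = (-syy * sxt + sxy * syt) / det
    v = (sxy * sxt - sxx * syt) / det
    return u, v

# Synthetic test: quadratic intensity surface shifted by 0.5 px in x
W = H = 6
d = 0.5
I0 = [[float(x * x + y * y) for x in range(W)] for y in range(H)]
I1 = [[(x - d) ** 2 + y * y for x in range(W)] for y in range(H)]
u, v = lucas_kanade(I0, I1)
print(round(u, 2), round(v, 2))  # approximately recovers the 0.5 px shift
```

Restricting the sums to a local region, as the paper does, is what keeps this per-frame cost low enough for realtime use on an embedded board.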
(This article belongs to the Section Computer Science & Engineering)
Article
Dimming Techniques Focusing on the Improvement in Luminous Efficiency for High-Brightness LEDs
Electronics 2021, 10(17), 2163; https://doi.org/10.3390/electronics10172163 - 04 Sep 2021
Viewed by 707
Abstract
The pulse width modulation (PWM) dimming mode features good dimming linearity and has been widely used for driving high-brightness light-emitting diodes (HBLEDs), in which the brightness change is achieved by modulating the duty cycle of the dimming signal to regulate the average current flowing through the LEDs. However, the current–illuminance characteristic curve of most LEDs is nonlinear in nature. Namely, for the same lighting power fed to the LED, conventional PWM dimming cannot make the LED exert the best luminous efficiency (LE) specified in datasheets. This paper focuses on further improving LED luminous efficiency via dimming manipulation. Thereby, two multilevel current dimming techniques, with varied dimming signal voltage and varied current-sensing resistance, are presented. With limited dimming capability, the proposed dimming strategies can efficiently raise the luminous flux ratio without increasing the power consumption. A prototype of a 115 W HBLED driver was developed, and the devised dimming schemes were realized by a digital signal controller (DSC). Experimental results, exhibited with illuminance–power curves and CIE1931 and CIE1976 chromaticity diagrams, are given to validate the theoretical derivation and effectiveness. Compared with conventional PWM dimming, under the same illuminance, the driver’s average output power is reduced by 17.08% and 13.17%, respectively; the improvement in average illuminance under the same output power is 13.66% and 11.17%, respectively. In addition, the overall average LE is increased by 21.36% and 16.37%, respectively. Full article
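The nonlinearity argument can be illustrated numerically: with a concave current-to-flux curve, running the LED continuously at a reduced current yields more flux than PWM pulsing at full current with the same average current. The power-law curve model below is a hypothetical stand-in, not the datasheet characteristic used in the paper.

```python
def flux(i, k=100.0, gamma=0.8):
    """Hypothetical concave current-to-luminous-flux curve (lm).
    gamma < 1 models the droop that makes high peak currents less
    efficient; a real curve would come from the LED datasheet."""
    return k * i ** gamma

def pwm_flux(duty, i_peak=1.0):
    # PWM dimming: full peak current for a fraction `duty` of the period
    return duty * flux(i_peak)

def level_flux(duty, i_peak=1.0):
    # Multilevel current dimming at the same average current (and, with
    # a near-constant forward voltage, roughly the same input power)
    return flux(duty * i_peak)

d = 0.5
print(pwm_flux(d), level_flux(d))  # the concave curve favors level dimming
```

By Jensen's inequality, this gap exists for any strictly concave flux curve, which is the opportunity the proposed multilevel current dimming exploits.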
(This article belongs to the Special Issue Electronic Devices on Intelligent IoT Applications)
Article
On Analyzing Beamforming Implementation in O-RAN 5G
Electronics 2021, 10(17), 2162; https://doi.org/10.3390/electronics10172162 - 04 Sep 2021
Cited by 1 | Viewed by 1150
Abstract
The open radio access network (O-RAN) concept is changing the landscape of mobile networks (5G deployment and 6G research). The O-RAN Alliance suggests that O-RAN can offer openness and intelligence to traditional RANs, enabling multiple vendors to re-shape the RAN structure and optimize the network. This paper positions the main research challenges of the O-RAN approach with regard to the implementation of beamforming. We investigate the O-RAN architecture and the configurations of the interfaces between O-RAN units, and present the split options between the radio and distributed units in terms of the O-RAN specification and 3GPP standards. From this point, we discuss the beamforming methods in O-RAN, addressing challenges and potential solutions, and suggest introducing the zero-forcing equalizer as a precoding vector in the channel-information-based beamforming method. This may be one solution for achieving flexibility in a high-traffic communication environment while reducing the radio unit interference caused by implementing the precoding in the open radio unit. Full article
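The zero-forcing precoder mentioned above can be sketched for a small MIMO channel: W = H^H (H H^H)^{-1}, so that H·W equals the identity and inter-stream interference is nulled. The 2×2 complex channel matrix below is an arbitrary example, not data from the paper.

```python
def zf_precoder_2x2(H):
    """Zero-forcing precoding matrix W = H^H (H H^H)^{-1} for a 2x2
    complex channel H, given as nested lists of complex numbers."""
    # Hermitian (conjugate) transpose of H
    Hh = [[H[0][0].conjugate(), H[1][0].conjugate()],
          [H[0][1].conjugate(), H[1][1].conjugate()]]
    # G = H @ H^H
    G = [[sum(H[i][k] * Hh[k][j] for k in range(2)) for j in range(2)]
         for i in range(2)]
    det = G[0][0] * G[1][1] - G[0][1] * G[1][0]
    Ginv = [[ G[1][1] / det, -G[0][1] / det],
            [-G[1][0] / det,  G[0][0] / det]]
    # W = H^H @ G^{-1}
    return [[sum(Hh[i][k] * Ginv[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

H = [[1 + 1j, 0.5], [0.2 - 0.3j, 1 - 0.5j]]  # arbitrary example channel
W = zf_precoder_2x2(H)
HW = [[sum(H[i][k] * W[k][j] for k in range(2)) for j in range(2)]
      for i in range(2)]
# H @ W recovers the identity matrix: interference between streams is zero
```

Pushing this precoding into the open radio unit, as the paper suggests, moves the interference nulling closer to the antennas at the cost of sharing channel information over the fronthaul interface.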
(This article belongs to the Special Issue Telecommunication Networks)
Article
Development and Verification of Infrastructure-Assisted Automated Driving Functions
Electronics 2021, 10(17), 2161; https://doi.org/10.3390/electronics10172161 - 04 Sep 2021
Cited by 2 | Viewed by 975
Abstract
The automated vehicles on public roads today are capable of up to SAE Level-3 conditional autonomy according to the SAE J3016 Standard taxonomy, where the driver remains mainly responsible for driving safety. All the decision-making processes of the system depend on computations performed on the ego vehicle, utilizing only on-board sensor information and mimicking the perception of a human driver. It can be conjectured that for higher levels of autonomy, on-board sensor information alone will not be sufficient. Infrastructure assistance will, therefore, be necessary for the system to assume partial or full responsibility for driving safety. With higher penetration rates of automated vehicles, however, new problems will arise. It is expected that automated driving, and particularly automated vehicle platoons, will lead to more road damage in the form of rutting. Inspired by this, the EU project ESRIUM investigates infrastructure-assisted routing recommendations utilizing C-ITS communications. In this respect, specially designed ADAS functions are being developed with the capability to adapt their behavior according to specific routing recommendations. Automated vehicles equipped with such ADAS functions will be able to reduce road damage. The current paper presents the specific use cases, as well as the developed C-ITS-assisted ADAS functions, together with their verification results in a simulation framework. Full article
Article
Exploiting the Outcome of Outlier Detection for Novel Attack Pattern Recognition on Streaming Data
Electronics 2021, 10(17), 2160; https://doi.org/10.3390/electronics10172160 - 04 Sep 2021
Cited by 1 | Viewed by 793
Abstract
Future-oriented networking infrastructures are characterized by highly dynamic Streaming Data (SD) whose volume, speed, and number of dimensions have increased significantly over the past couple of years, energized by trends such as Software-Defined Networking and Artificial Intelligence. As an essential core component of network security, Intrusion Detection Systems (IDS) help to uncover malicious activity. In particular, consecutively applied alert correlation methods can aid in mining attack patterns based on the alerts generated by IDS. However, most of the existing methods lack the functionality to deal with SD affected by the phenomenon called concept drift and are mainly designed to operate on the output of signature-based IDS. Although unsupervised Outlier Detection (OD) methods have the ability to detect yet unknown attacks, most of the alert correlation methods cannot handle the outcome of such anomaly-based IDS. In this paper, we introduce a novel framework called Streaming Outlier Analysis and Attack Pattern Recognition (SOAAPR), which is able to process the output of various online unsupervised OD methods in a streaming fashion to extract information about novel attack patterns. Three different privacy-preserving, fingerprint-like signatures are computed from the clustered set of correlated alerts by SOAAPR, which characterize and represent the potential attack scenarios with respect to their communication relations, their manifestation in the data’s features, and their temporal behavior. Beyond the recognition of known attacks, the derived signatures can be compared to find similarities between yet unknown and novel attack patterns. The evaluation, which is split into two parts, takes advantage of attack scenarios from the widely used CICIDS2017 and CSE-CIC-IDS2018 datasets.
Firstly, the streaming alert correlation capability is evaluated on CICIDS2017 and compared to a state-of-the-art offline algorithm, called Graph-based Alert Correlation (GAC), which has the potential to deal with the outcome of anomaly-based IDS. Secondly, the three types of signatures are computed from attack scenarios in the datasets and compared to each other. On the one hand, the discussion of results shows that SOAAPR can compete with GAC in terms of alert correlation capability, leveraging four different metrics, and outperforms it significantly in terms of processing time by an average factor of 70 across 11 attack scenarios. On the other hand, in most cases, all three types of signatures reliably characterize attack scenarios such that similar ones are grouped together, with up to 99.05% similarity between the FTP and SSH Patator attacks. Full article
(This article belongs to the Special Issue Data Security)
Article
Cross-Domain Classification of Physical Activity Intensity: An EDA-Based Approach Validated by Wrist-Measured Acceleration and Physiological Data
Electronics 2021, 10(17), 2159; https://doi.org/10.3390/electronics10172159 - 04 Sep 2021
Viewed by 603
Abstract
Performing regular physical activity positively affects individuals’ quality of life in both the short and long term and also contributes to the prevention of chronic diseases. However, exerted effort is perceived subjectively and differs between individuals. Therefore, this work explores an out-of-laboratory approach using a wrist-worn device to classify the perceived intensity of physical effort based on quantitatively measured data. First, the exerted intensity is classified by two machine learning algorithms, namely the Support Vector Machine and the Bagged Tree, fed with features computed on heart-related parameters, skin temperature, and wrist acceleration. Then, the outcomes of the classification are exploited to validate the use of the Electrodermal Activity signal alone to rate the perceived effort. The results show that the Support Vector Machine algorithm applied to the physiological and acceleration data effectively predicted the relative physical activity intensities, while the Bagged Tree performed best when the Electrodermal Activity data were the only data used. Full article
(This article belongs to the Special Issue Machine Learning and Deep Learning for Biosignals Interpretation)
Article
Conceptualization and Analysis of a Next-Generation Ultra-Compact 1.5-kW PCB-Integrated Wide-Input-Voltage-Range 12V-Output Industrial DC/DC Converter Module
Electronics 2021, 10(17), 2158; https://doi.org/10.3390/electronics10172158 - 04 Sep 2021
Cited by 2 | Viewed by 977
Abstract
The next-generation industrial environment requires power supplies that are compact, efficient, low-cost, and ultra-reliable, even across mains failures, to power mission-critical electrified processes. Hold-up time requirements and the demand for ultra-high power density and minimum production costs, in particular, drive the need for power converters with (i) a wide input voltage range, to reduce the size of the hold-up capacitor, (ii) soft-switching over the full input voltage and load ranges, to achieve low losses that facilitate a compact realization, and (iii) complete PCB-integration for low-cost manufacturing. In this work, we conceptualize, design, model, fabricate, and characterize a 1.5 kW, 12 V-output DC/DC converter for industrial power supplies that is required to operate across a wide 300 V–430 V input voltage range. This module utilizes an LLC-based control scheme for complete soft-switching and a snake-core transformer to divide the output current with a balanced flux among multiple secondary windings. Detailed loss models are derived for every component in the converter. The converter achieves close to 96% peak efficiency with a power density of 337 W/in³ (20.6 kW/dm³), excellent matching to the derived loss models, and zero-voltage switching even down to zero load. The loss models are used to identify improvements to further boost efficiency, the most important of which is the minimization of delay times in synchronous rectification, and a subsequent improved 1.5 kW hardware module eliminates nearly 25% of converter losses for a peak efficiency of nearly 97% with a power density of 308 W/in³ (18.8 kW/dm³). Two 1.5 kW modules are then paralleled to achieve 3 kW output power at 12 V and 345 W/in³ (21.1 kW/dm³) with ideal current sharing between the secondary outputs and no drop in efficiency from a single module, an important characteristic enabled by the novel snake-core transformer. Full article
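The link between a wide input voltage range and a smaller hold-up capacitor follows from the capacitor energy balance 0.5·C·(V_start² − V_min²) = P·t_hold. The back-of-the-envelope sketch below uses illustrative numbers (dropout time, bus voltages, and the narrow-range comparison point are assumptions, not the paper's design values).

```python
def holdup_capacitance(p_out, t_hold, v_start, v_min, eta=1.0):
    """Minimum DC-link capacitance so the converter can deliver p_out
    (W) for t_hold (s) while the bus sags from v_start to v_min (V),
    from the energy balance 0.5*C*(v_start^2 - v_min^2) = p_out*t_hold/eta."""
    return 2 * p_out * t_hold / (eta * (v_start**2 - v_min**2))

# Illustrative: 1.5 kW carried through a 20 ms mains dropout from a 400 V bus
wide_range = holdup_capacitance(1500, 20e-3, 400, 300)    # works down to 300 V
narrow_range = holdup_capacitance(1500, 20e-3, 400, 360)  # works down to 360 V
print(wide_range * 1e6, narrow_range * 1e6)  # uF: wide range needs far less C
```

The quadratic dependence on voltage is why extending the usable lower bound of the input range shrinks the hold-up capacitor so effectively.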
(This article belongs to the Special Issue Advances in Low Power and High Power Electronics)
Article
NDF and PSF Analysis in Inverse Source and Scattering Problems for Circumference Geometries
Electronics 2021, 10(17), 2157; https://doi.org/10.3390/electronics10172157 - 04 Sep 2021
Cited by 2 | Viewed by 1726
Abstract
This paper aims at discussing the resolution achievable in the reconstruction of both circumference sources from their radiated far-field and circumference scatterers from their scattered far-field, observed for the 2D scalar case. The investigation is based on an inverse problem approach, requiring the analysis of the spectral decomposition of the pertinent linear operator by the Singular Value Decomposition (SVD). The attention is focused on the evaluation of the Number of Degrees of Freedom (NDF), connected to the behavior of the singular values, and of the Point Spread Function (PSF), which accounts for the reconstruction of a point-like unknown and depends on both the NDF and the singular functions. A closed-form evaluation of the PSF relevant to the inverse source problem is first provided. In addition, an approximated closed-form evaluation is introduced and compared with the exact one. This is important for the subsequent evaluation of the PSF relevant to the inverse scattering problem, which is based on a similar approximation. In this case, the approximation accuracy of the PSF is verified by numerical simulation, at least in its main lobe region, since that is the most critical region as far as the resolution discussion is concerned. The main result of the analysis is the space invariance of the PSF when the observation is full-angle in the far-zone region, showing that resolution remains unchanged over the entire source/investigation domain in the considered geometries. The paper also poses the problem of identifying the minimum number and the optimal directions of the impinging plane waves in the inverse scattering problem to achieve the full NDF; some numerical results about it are presented. Finally, a numerical application of the PSF concept is performed in inverse scattering, and its relevance in the presence of noisy data is outlined. Full article
(This article belongs to the Section Microwave and Wireless Communications)
Article
Linked-Object Dynamic Offloading (LODO) for the Cooperation of Data and Tasks on Edge Computing Environment
Electronics 2021, 10(17), 2156; https://doi.org/10.3390/electronics10172156 - 03 Sep 2021
Cited by 1 | Viewed by 576
Abstract
With the evolution of the Internet of Things (IoT), edge computing technology is used to efficiently process the rapidly increasing volume of data from various IoT devices. Edge computing offloading reduces data processing time and bandwidth usage by processing data in real time on the device where the data are generated or on a nearby server. Previous studies have proposed offloading between IoT devices through local-edge collaboration from resource-constrained edge servers. However, they did not consider nearby edge servers in the same layer with available computing resources. Consequently, quality of service (QoS) degrades due to the restricted resources of edge computing, and execution latency increases due to congestion. Finding an optimal target server to handle offloaded tasks in a rapidly changing dynamic environment is still challenging. Therefore, a new cooperative offloading method to control edge computing resources is needed to efficiently allocate limited resources between distributed edges. This paper suggests the LODO (linked-object dynamic offloading) algorithm, which provides an ideal balance between edges by considering the ready state or running state. The LODO algorithm carries out the tasks in its list in order of the correlation between data and tasks through linked objects. Furthermore, dynamic offloading considers the running status of all cooperative terminals and schedules task distribution accordingly. This can decrease the average delay time and average power consumption of terminals. In addition, the resource shortage problem can be settled by reducing task processing through distribution. Full article
(This article belongs to the Special Issue Edge Computing for Internet of Things)
Review
A Survey on QoE-Oriented VR Video Streaming: Some Research Issues and Challenges
Electronics 2021, 10(17), 2155; https://doi.org/10.3390/electronics10172155 - 03 Sep 2021
Cited by 1 | Viewed by 858
Abstract
With the advent of the information age, VR video streaming services have emerged in large numbers in scenarios such as immersive entertainment, smart education, and the Internet of Vehicles. People are also demanding an increasing number of virtual-reality (VR) services, and service providers must ensure a good user experience. Therefore, the quality of the VR user’s experience is receiving increasing attention from academia and industry. The review in this paper focuses on a comprehensive summary of the current state of quality-of-experience (QoE) technologies applied to VR video streaming. First, we review the main influencing factors of QoE and VR video streaming. Second, the user QoE for VR evaluation is discussed. Third, the modeling of QoE for VR video streaming, the QoE-oriented VR optimization problem, and enabling techniques of machine learning for VR video streaming improvement are summarized. Lastly, we present current challenges and possible future research directions. Full article
(This article belongs to the Section Electronic Multimedia)
Article
Co-Simulation Analysis for Performance Prediction of Synchronous Reluctance Drives
Electronics 2021, 10(17), 2154; https://doi.org/10.3390/electronics10172154 - 03 Sep 2021
Cited by 2 | Viewed by 889
Abstract
To improve the design of electric drives and to better predict system performance, numerical simulation has been widely employed. Whereas in the majority of approaches the machines and the power electronics are designed and simulated separately, a coupled co-simulation should be performed to improve fidelity. This paper presents a complete coupled co-simulation model of a synchronous reluctance machine (SynRel) drive, which includes the finite element model of the SynRel, the power electronics inverter, the control system, and application examples. The model of the SynRel is based on a finite element model (FEM) built in Simcenter MagNet. The power electronics inverter is built using the PLECS Blockset, and the drive control model is built in the Simulink environment, which allows for coupling between MagNet and PLECS. The proposed simulation model provides high accuracy thanks to the complete FEA-based model fed by the actual inverter voltage. The comparison of the simulation results with experimental measurements shows good correspondence. Full article
(This article belongs to the Special Issue Power Electronics and Control of High-Speed Electrical Drives)
Article
Automatic Estimation of Food Intake Amount Using Visual and Ultrasonic Signals
Electronics 2021, 10(17), 2153; https://doi.org/10.3390/electronics10172153 - 03 Sep 2021
Cited by 1 | Viewed by 524
Abstract
The continuous monitoring and recording of food intake amount without user intervention is very useful in the prevention of obesity and metabolic diseases. I adopted a technique that automatically recognizes food intake amount by combining the identification of food types through image recognition with a technique that uses an acoustic modality to recognize chewing events. The accuracy of using an audio signal to detect eating activity is seriously degraded in a noisy environment. To alleviate this problem, contact sensing methods have conventionally been adopted, wherein sensors are attached to the face or neck region to reduce external noise. Such sensing methods, however, cause dermatological discomfort and a feeling of cosmetic unnaturalness for most users. In this study, a noise-robust, non-contact sensing method was employed, wherein ultrasonic Doppler shifts were used to detect chewing events. The experimental results showed that the mean absolute percentage errors (MAPEs) of the ultrasonic-based method were comparable with those of the audio-based method (15.3 vs. 14.6) when 30 food items were used for the experiments. The food intake amounts were estimated for eight subjects in several noisy environments (cafeterias, restaurants, and home dining rooms). For all subjects, the estimation accuracy of the ultrasonic method was not degraded (the average MAPE was 15.02) even under noisy conditions. These results show that the proposed method has the potential to replace the manual logging method. Full article
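The MAPE metric used to compare the ultrasonic and audio-based estimates is straightforward to compute; the intake amounts below are hypothetical, not the paper's data.

```python
def mape(actual, predicted):
    """Mean absolute percentage error between measured and estimated
    values, expressed in percent (requires nonzero actual values)."""
    assert len(actual) == len(predicted) and all(a != 0 for a in actual)
    return 100.0 * sum(abs((a - p) / a)
                       for a, p in zip(actual, predicted)) / len(actual)

# Hypothetical intake amounts in grams for three meals
actual = [120.0, 80.0, 200.0]
predicted = [100.0, 90.0, 210.0]
print(round(mape(actual, predicted), 2))
```

A MAPE of around 15, as reported for both modalities in the paper, corresponds to estimates that are on average within about 15% of the true intake amount.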
(This article belongs to the Special Issue Ultrasonic Pattern Recognition by Machine Learning)