Editor’s Choice Articles

Editor’s Choice articles are based on recommendations by the scientific editors of MDPI journals from around the world. Editors select a small number of articles recently published in the journal that they believe will be particularly interesting to readers, or important in the respective research area. The aim is to provide a snapshot of some of the most exciting work published in the various research areas of the journal.

21 pages, 551 KiB  
Review
Cybersecurity in Autonomous Vehicles—Are We Ready for the Challenge?
by Irmina Durlik, Tymoteusz Miller, Ewelina Kostecka, Zenon Zwierzewicz and Adrianna Łobodzińska
Electronics 2024, 13(13), 2654; https://doi.org/10.3390/electronics13132654 - 6 Jul 2024
Cited by 7 | Viewed by 13268
Abstract
The rapid development and deployment of autonomous vehicles (AVs) present unprecedented opportunities and challenges in the transportation sector. While AVs promise enhanced safety, efficiency, and convenience, they also introduce significant cybersecurity vulnerabilities due to their reliance on advanced electronics, connectivity, and artificial intelligence (AI). This review examines the current state of cybersecurity in autonomous vehicles, identifying major threats such as remote hacking, sensor manipulation, data breaches, and denial of service (DoS) attacks. It also explores existing countermeasures including intrusion detection systems (IDSs), encryption, over-the-air (OTA) updates, and authentication protocols. Despite these efforts, numerous challenges remain, including the complexity of AV systems, lack of standardization, latency issues, and resource constraints. This review concludes by highlighting future directions in cybersecurity research and development, emphasizing the potential of AI and machine learning, blockchain technology, industry collaboration, and legislative measures to enhance the security of autonomous vehicles. Full article
(This article belongs to the Special Issue Autonomous and Connected Vehicles)

13 pages, 579 KiB  
Article
Evaluating Deep Learning Resilience in Retinal Fundus Classification with Generative Adversarial Networks Generated Images
by Marcello Di Giammarco, Antonella Santone, Mario Cesarelli, Fabio Martinelli and Francesco Mercaldo
Electronics 2024, 13(13), 2631; https://doi.org/10.3390/electronics13132631 - 4 Jul 2024
Cited by 1 | Viewed by 997
Abstract
The evaluation of Generative Adversarial Networks in the medical domain has shown significant potential for various applications, including adversarial machine learning on medical imaging. This study specifically focuses on assessing the resilience of Convolutional Neural Networks in differentiating between real and Generative Adversarial Network-generated retinal images. The main contributions of this research include the training and testing of Convolutional Neural Networks to evaluate their ability to distinguish real images from synthetic ones. By identifying networks with optimal performance, the study ensures the development of better models for diagnostic classification, enhancing generalization and resilience to adversarial images. Overall, the aim of the study is to demonstrate that the application of Generative Adversarial Networks can improve the resilience of the tested networks, resulting in better classifiers for retinal images. In particular, a network developed by the authors, Standard_CNN, achieves the best performance, with an accuracy equal to 1. Full article
(This article belongs to the Special Issue Human-Computer Interactions in E-health)

18 pages, 487 KiB  
Article
NLOCL: Noise-Labeled Online Continual Learning
by Kan Cheng, Yongxin Ma, Guanglu Wang, Linlin Zong and Xinyue Liu
Electronics 2024, 13(13), 2560; https://doi.org/10.3390/electronics13132560 - 29 Jun 2024
Viewed by 937
Abstract
Continual learning (CL) from infinite data streams has become a challenge for neural network models in real-world scenarios. Catastrophic forgetting of previous knowledge occurs in this learning setting, and existing supervised CL methods rely excessively on accurately labeled samples. However, real-world data labels are usually corrupted by noise, which misleads CL agents and aggravates forgetting. To address this problem, we propose a method named noise-labeled online continual learning (NLOCL), which implements the online CL model with noise-labeled data streams. NLOCL uses an experience replay strategy to retain crucial examples, separates data streams by a small-loss criterion, and includes semi-supervised fine-tuning for labeled and unlabeled samples. In addition, NLOCL combines small loss with class diversity measures and eliminates online memory partitioning. Furthermore, we optimized the experience replay stage to enhance model performance by retaining significant clean-labeled examples and carefully selecting suitable samples. In the experiments, we designed noise-labeled data streams by injecting noisy labels into multiple datasets and partitioning tasks to simulate infinite data streams realistically. The experimental results demonstrate the superior performance and robust learning capabilities of our proposed method. Full article
(This article belongs to the Special Issue Emerging Theory and Applications in Natural Language Processing)
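The small-loss separation described in the NLOCL abstract can be sketched in a few lines; the function below is an illustrative reading of that step (the interface and the clean-fraction parameter are assumptions, not the authors' implementation):

```python
def split_by_small_loss(samples, losses, clean_fraction=0.5):
    """Small-loss criterion: samples on which the current model already has
    low loss are assumed to carry clean labels; the rest are treated as
    noisy/unlabeled and handed to semi-supervised fine-tuning."""
    ranked = sorted(zip(samples, losses), key=lambda pair: pair[1])
    k = int(len(ranked) * clean_fraction)
    likely_clean = [s for s, _ in ranked[:k]]
    likely_noisy = [s for s, _ in ranked[k:]]
    return likely_clean, likely_noisy
```

In the paper's online setting this split would be recomputed per incoming mini-batch and combined with a class-diversity measure when choosing replay examples.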

18 pages, 7980 KiB  
Article
Evaluation of Two Digital Wound Area Measurement Methods Using a Non-Randomized, Single-Center, Controlled Clinical Trial
by Lorena Casanova-Lozano, David Reifs-Jiménez, Maria del Mar Martí-Ejarque, Ramon Reig-Bolaño and Sergi Grau-Carrión
Electronics 2024, 13(12), 2390; https://doi.org/10.3390/electronics13122390 - 18 Jun 2024
Viewed by 1588
Abstract
A prospective, single-center, non-randomized, pre-marketing clinical investigation was conducted with a single group of subjects to collect skin lesion images. These images were subsequently utilized to compare the results obtained from a traditional method of wound size measurement with two novel methods developed using Machine Learning (ML) approaches. Both proposed methods automatically calculate the wound area from an image. One method employs a two-dimensional system with the assistance of an external calibrator, while the other utilizes an Augmented Reality (AR) system, eliminating the need for a physical calibration object. To validate the correlation between these methods, a gold standard measurement with digital planimetry was employed. A total of 67 wound images were obtained from 41 patients between 22 November 2022 and 10 February 2023. The conducted pre-marketing clinical investigation demonstrated that the ML algorithms are safe for both the intended user and the intended target population. They exhibit a high correlation with the gold standard method and are more accurate than traditional methods. Additionally, they conform to the manufacturer’s intended use. The study validated the performance, safety, and usability of the implemented methods as a valuable tool in the measurement of skin lesions. Full article
(This article belongs to the Special Issue Artificial Intelligence and Signal Processing: Circuits and Systems)
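The calibrator-based measurement in the first method reduces to a pixel-to-area conversion; a minimal sketch, assuming a segmented wound mask and a calibrator of known physical area visible in the same image plane (the function name and interface are hypothetical):

```python
def wound_area_cm2(wound_pixels, calibrator_pixels, calibrator_area_cm2):
    """Scale the wound's pixel count by the cm^2-per-pixel factor derived
    from an external calibrator captured in the same image plane."""
    cm2_per_pixel = calibrator_area_cm2 / calibrator_pixels
    return wound_pixels * cm2_per_pixel
```

The AR variant described in the abstract replaces the physical calibrator with the device's own estimate of real-world scale.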

9 pages, 3241 KiB  
Article
Mobility Extraction Using Improved Resistance Partitioning Methodology for Normally-OFF Fully Vertical GaN Trench MOSFETs
by Valentin Ackermann, Blend Mohamad, Hala El Rammouz, Vishwajeet Maurya, Eric Frayssinet, Yvon Cordier, Matthew Charles, Gauthier Lefevre, Julien Buckley and Bassem Salem
Electronics 2024, 13(12), 2350; https://doi.org/10.3390/electronics13122350 - 15 Jun 2024
Viewed by 1689
Abstract
In this work, fully vertical GaN trench MOSFETs were fabricated and characterized to evaluate their electrical performance. The transistors show normally-OFF behavior with a high I_ON/I_OFF ratio (~10^9) and a very low gate leakage current (10^−11 A/mm). Thanks to an improved resistance partitioning method, the resistances of the trench bottom and trench channel were extracted accurately by taking into account different charging conditions. This methodology enabled an estimation of the effective channel and bottom mobilities of 11.1 cm^2/V·s and 15.1 cm^2/V·s, respectively. Full article
(This article belongs to the Special Issue Wide-Bandgap Device Application: Devices, Circuits, and Drivers)
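Once the channel resistance has been isolated by the partitioning method, the effective mobility follows from the standard linear-region MOSFET relation; the sketch below uses that textbook expression with hypothetical device parameters (the paper's actual geometry and oxide capacitance are not given in the abstract):

```python
def effective_mobility(r_channel_ohm, length_cm, width_cm,
                       cox_f_per_cm2, vgs_v, vth_v):
    """Invert R_ch = L / (W * Cox * mu_eff * (Vgs - Vth)) for mu_eff,
    giving the effective channel mobility in cm^2/(V*s)."""
    return length_cm / (width_cm * cox_f_per_cm2 * (vgs_v - vth_v) * r_channel_ohm)
```

With placeholder values L = 1 µm, W = 1 mm, Cox = 0.3 µF/cm^2, and an overdrive of 10 V, a channel resistance of about 30 Ω maps to a mobility of roughly 11 cm^2/V·s, the order reported in the abstract.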

13 pages, 2270 KiB  
Perspective
Challenges: ESD Protection for Heterogeneously Integrated SoICs in Advanced Packaging
by Zijin Pan, Xunyu Li, Weiquan Hao, Runyu Miao, Zijian Yue and Albert Wang
Electronics 2024, 13(12), 2341; https://doi.org/10.3390/electronics13122341 - 15 Jun 2024
Cited by 1 | Viewed by 4703
Abstract
Electrostatic discharge (ESD) failure is a major reliability problem for all forms of microelectronics products. ESD protection is required for all integrated circuits (ICs). As dimension scaling-down approaches its physical limit, heterogeneous integration (HI) emerges as a main pathway towards the age beyond Moore’s Law to facilitate advanced microsystem chips with extreme performance and rich functionalities. Advanced packaging is a key requirement for HI-enabled integrated systems-on-chiplets (SoIC) that require robust ESD protection solutions. This article outlines key emerging technical challenges associated with smart future SoIC microsystem superchips in the context of advanced packaging technologies. Full article
(This article belongs to the Special Issue Advanced Electronic Packaging Technology)

16 pages, 318 KiB  
Article
DPShield: Optimizing Differential Privacy for High-Utility Data Analysis in Sensitive Domains
by Pratik Thantharate, Shyam Bhojwani and Anurag Thantharate
Electronics 2024, 13(12), 2333; https://doi.org/10.3390/electronics13122333 - 14 Jun 2024
Cited by 3 | Viewed by 1267
Abstract
The proliferation of cloud computing has amplified the need for robust privacy-preserving technologies, particularly when dealing with sensitive financial and human resources (HR) data. However, traditional differential privacy methods often struggle to balance rigorous privacy protections with maintaining data utility. This study introduces DPShield, an optimized adaptive framework that enhances the trade-off between privacy guarantees and data utility in cloud environments. DPShield leverages advanced differential privacy techniques, including dynamic noise-injection mechanisms tailored to data sensitivity, cumulative privacy loss tracking, and domain-specific optimizations. Through comprehensive evaluations on synthetic financial and real-world HR datasets, DPShield demonstrated a remarkable 21.7% improvement in aggregate query accuracy over existing differential privacy approaches. Moreover, it maintained machine learning model accuracy within 5% of non-private benchmarks, ensuring high utility for predictive analytics. These achievements signify a major advancement in differential privacy, offering a scalable solution that harmonizes robust privacy assurances with practical data analysis needs. DPShield’s domain adaptability and seamless integration with cloud architectures underscore its potential as a versatile privacy-enhancing tool. This work bridges the gap between theoretical privacy guarantees and practical implementation demands, paving the way for more secure, ethical, and insightful data usage in cloud computing environments. Full article
(This article belongs to the Special Issue Artificial Intelligence and Applications—Responsible AI)
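Two of the mechanisms the abstract names, sensitivity-scaled noise injection and cumulative privacy-loss tracking, can be illustrated with a toy Laplace-mechanism accountant; the class and method names are assumptions, not DPShield's API:

```python
import random

class PrivacyAccountant:
    """Toy sketch: each query spends part of a global epsilon budget and
    receives Laplace noise scaled to its sensitivity (b = sensitivity/epsilon)."""

    def __init__(self, total_epsilon):
        self.total_epsilon = total_epsilon
        self.spent = 0.0  # cumulative privacy loss across queries

    def noisy_query(self, true_value, sensitivity, epsilon):
        if self.spent + epsilon > self.total_epsilon + 1e-12:
            raise RuntimeError("privacy budget exhausted")
        self.spent += epsilon
        scale = sensitivity / epsilon
        # Laplace(0, scale): exponential magnitude with a random sign
        noise = random.choice((-1.0, 1.0)) * random.expovariate(1.0 / scale)
        return true_value + noise
```

DPShield's adaptive variant would additionally tune the noise scale to the data sensitivity of each domain; this sketch only shows the budget bookkeeping the adaptation rests on.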

17 pages, 3222 KiB  
Article
Dynamic Difficulty Adaptation Based on Stress Detection for a Virtual Reality Video Game: A Pilot Study
by Carmen Elisa Orozco-Mora, Rita Q. Fuentes-Aguilar and Gustavo Hernández-Melgarejo
Electronics 2024, 13(12), 2324; https://doi.org/10.3390/electronics13122324 - 14 Jun 2024
Cited by 1 | Viewed by 2125
Abstract
Virtual reality (VR) is continuing to grow as more affordable technological devices become available. Video games are one of the most profitable applications, while rehabilitation has the most significant social impact. Both applications require a proper user evaluation to provide personalized experiences that avoid boring or stressful situations. Despite the successful applications, there are several opportunities to improve the field of human–machine interactions, one of the most popular ones being the use of affect detection to create personalized experiences. In that sense, this study presents the implementation of two dynamic difficulty adaptation (DDA) strategies. The person’s affective state is estimated through a machine learning classification model, which later serves to adapt the difficulty of the video game online. The results show that it is possible to maintain the user at a given difficulty level, which is analogous to achieving the well-known flow state. Between the two implemented strategies, no statistically significant differences were found in the workload experienced by the users. However, one of the strategies induced higher physical demand and frustration, as validated by the recorded muscular activity. The results obtained contribute to the state of the art of DDA strategies in virtual reality driven by affective data. Full article
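A threshold rule of the kind described above can be written compactly; the thresholds and level bounds below are illustrative, and the stress probability stands in for the output of the paper's machine learning classifier:

```python
def update_difficulty(level, stress_probability,
                      low=0.3, high=0.7, min_level=1, max_level=10):
    """One online DDA step: back off when the affect classifier reports
    high stress, push harder when the player seems under-challenged."""
    if stress_probability > high:
        level -= 1
    elif stress_probability < low:
        level += 1
    return max(min_level, min(max_level, level))
```

Run once per adaptation window, a rule like this keeps the player near the band of difficulty associated with the flow state.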

17 pages, 1591 KiB  
Article
Understanding Learner Satisfaction in Virtual Learning Environments: Serial Mediation Effects of Cognitive and Social-Emotional Factors
by Xin Yin, Jiakai Zhang, Gege Li and Heng Luo
Electronics 2024, 13(12), 2277; https://doi.org/10.3390/electronics13122277 - 10 Jun 2024
Cited by 1 | Viewed by 3042
Abstract
This study explored the relationship between technology acceptance and learning satisfaction within a virtual learning environment (VLE) with cognitive presence, cognitive engagement, social presence, and emotional engagement as mediators. A total of 237 university students participated and completed a questionnaire after studying in the Virbela VLE. The results revealed direct and indirect links between technology acceptance and virtual learning satisfaction. The mediation analysis showed the critical mediating roles of cognitive presence and emotional engagement in fostering satisfaction. There also appeared to be a sequential mediating pathway from technology acceptance to learning satisfaction through social presence and emotional engagement. Notably, cognitive engagement and social presence did not have a significant mediating effect on satisfaction. These results provide a supplementary perspective on how technological, cognitive, and emotional factors can enhance student satisfaction in VLEs. The study concludes with several implications for future research and practice of VLEs in higher education. Full article

16 pages, 2241 KiB  
Article
A Simple Thermal Model for Junction and Hot Spot Temperature Estimation of 650 V GaN HEMT during Short Circuit
by Simone Palazzo, Annunziata Sanseverino, Giovanni Canale Parola, Emanuele Martano, Francesco Velardi and Giovanni Busatto
Electronics 2024, 13(11), 2189; https://doi.org/10.3390/electronics13112189 - 4 Jun 2024
Viewed by 1159
Abstract
Temperature is a critical parameter for GaN HEMTs, as it impacts the electrical characteristics of the device more sharply than for SiC or Si MOSFETs. Whether designing a power converter or testing a device for reliability and robustness characterization, it is essential to estimate the junction temperature of the device. For this aim, manufacturers provide compact models to simulate the device in SPICE-based simulators. These models provide the junction temperature, which is considered uniform along the channel. We demonstrate through two-dimensional numerical simulations that this approach is not suitable when the device undergoes high electrothermal stress, such as during short circuit (SC), when the temperature distribution along the channel is strongly non-uniform. Based on numerical simulations and experimental measurements on a 650 V/4 A GaN HEMT, we derived a thermal network suitable for SPICE simulations that correctly computes the junction temperature and the SC current, although it does not provide information about the possible failure of the device due to the formation of a local hot spot. For this reason, we used a second thermal network to estimate the maximum temperature reached inside the device, whose results are in good agreement with the experimentally observed failures. Full article
(This article belongs to the Special Issue Nitride Semiconductor Devices and Applications)
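A SPICE-compatible thermal network of the kind the abstract mentions is commonly a Foster RC ladder; the helper below evaluates its step response for a constant dissipated power (the stage values in the usage note are placeholders, not the paper's fitted network):

```python
import math

def junction_temp_rise(power_w, t_s, foster_stages):
    """Temperature rise of a Foster RC thermal network under a constant
    power step: dT(t) = P * sum_i R_i * (1 - exp(-t / (R_i * C_i))),
    with foster_stages a list of (R_i in K/W, C_i in J/K) pairs."""
    return power_w * sum(r * (1.0 - math.exp(-t_s / (r * c)))
                         for r, c in foster_stages)
```

At times much longer than every stage's time constant, the rise saturates at P times the total thermal resistance, e.g. 20 K for a 10 W step into a single 2 K/W stage.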

18 pages, 3974 KiB  
Article
Curved Domains in Magnetics: A Virtual Element Method Approach for the T.E.A.M. 25 Benchmark Problem
by Franco Dassi, Paolo Di Barba and Alessandro Russo
Electronics 2024, 13(11), 2053; https://doi.org/10.3390/electronics13112053 - 24 May 2024
Cited by 1 | Viewed by 1200
Abstract
In this paper, we are interested in solving optimal shape design problems. A critical challenge within this framework is generating the mesh of the computational domain at each optimisation step according to the information provided by the minimising functional. To enhance efficiency, we propose a strategy based on the Finite Element Method (FEM) and the Virtual Element Method (VEM). Specifically, we exploit the flexibility of the VEM in dealing with generally shaped polygons, including those with hanging nodes, to update the mesh solely in regions where the shape varies. In the remaining parts of the domain, we employ the FEM, known for its robustness and applicability in such scenarios. We numerically validate the proposed approach on the T.E.A.M. 25 benchmark problem and compare the results obtained with this procedure with those proposed in the literature based solely on the FEM. Moreover, since the T.E.A.M. 25 benchmark problem is also characterised by curved shapes, we utilise the VEM to accurately incorporate these “exact” curves into the discrete solution itself. Full article
(This article belongs to the Section Microelectronics)

17 pages, 15053 KiB  
Article
Encryption Method for JPEG Bitstreams for Partially Disclosing Visual Information
by Mare Hirose, Shoko Imaizumi and Hitoshi Kiya
Electronics 2024, 13(11), 2016; https://doi.org/10.3390/electronics13112016 - 22 May 2024
Cited by 1 | Viewed by 1409
Abstract
In this paper, we propose a novel encryption method for JPEG bitstreams in which the encrypted data preserve the JPEG file format with the same size as the unencrypted data. Accordingly, data encrypted with the method can be decoded by a standard JPEG decoder without any modification of the header information. In addition, the method makes two contributions that conventional bitstream-level encryption methods do not: spatially partial encryption and block-permutation-based encryption. To achieve this, we propose, for the first time, the use of the restart marker code, which can be inserted at regular intervals between minimum coded units (MCUs) for encryption. This allows us to define extended blocks separated by restart markers, which makes both contributions possible. In experiments, the effectiveness of the method is verified in terms of file size preservation and the visibility of encrypted images. Full article
(This article belongs to the Special Issue Modern Computer Vision and Image Analysis)
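Restart markers are the two-byte codes 0xFFD0 through 0xFFD7; locating them is what delimits the extended blocks the method encrypts or permutes. A minimal scanner over the entropy-coded data (illustrative only; a real parser must also respect segment boundaries):

```python
def restart_marker_positions(jpeg_bytes):
    """Return offsets of JPEG restart markers (0xFF 0xD0..0xD7), which
    split the entropy-coded data into independently decodable chunks.
    Stuffed bytes (0xFF 0x00) are naturally skipped by the range check."""
    positions = []
    i = 0
    while i < len(jpeg_bytes) - 1:
        if jpeg_bytes[i] == 0xFF and 0xD0 <= jpeg_bytes[i + 1] <= 0xD7:
            positions.append(i)
            i += 2  # skip the two-byte marker
        else:
            i += 1
    return positions
```

The byte spans between consecutive marker offsets are exactly the units that can be individually encrypted or permuted without breaking standard decoding.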

21 pages, 4639 KiB  
Article
Enhancing Learning of 3D Model Unwrapping through Virtual Reality Serious Game: Design and Usability Validation
by Bruno Rodriguez-Garcia, José Miguel Ramírez-Sanz, Ines Miguel-Alonso and Andres Bustillo
Electronics 2024, 13(10), 1972; https://doi.org/10.3390/electronics13101972 - 17 May 2024
Cited by 1 | Viewed by 2117
Abstract
Given the difficulty of explaining the unwrapping process through traditional teaching methodologies, this article presents the design, development, and validation of an immersive Virtual Reality (VR) serious game, named Unwrap 3D Virtual: Ready (UVR), aimed at facilitating the learning of unwrapping 3D models. The game incorporates animations to aid users in understanding the unwrapping process, following Mayer’s Cognitive Theory of Multimedia Learning and Gamification principles. The game is structured into four levels of increasing complexity; users progress through different aspects of 3D model unwrapping, with the final level allowing for result review. A sample of 53 students with experience in 3D modeling was categorized based on device (PC or VR) and previous experience (XP) in VR, resulting in Low-XP, Mid-XP, and High-XP groups. Hierarchical clustering identified three clusters, reflecting varied user behaviors. Results from surveys assessing game experience, presence, and satisfaction show that VR users reported higher immersion, although greater satisfaction was observed in the PC group owing to a bug in the VR version. Novice users exhibited higher satisfaction, attributed to the novelty effect, while experienced users demonstrated greater control and proficiency. Full article
(This article belongs to the Special Issue Serious Games and Extended Reality (XR))

10 pages, 3346 KiB  
Article
Correlation between CO2 Sensitivity and Channel-Layer Thickness in In2O3 Thin-Film Transistor Gas Sensors
by Ayumu Nodera, Ryota Kobayashi, Tsubasa Kobayashi and Shinya Aikawa
Electronics 2024, 13(10), 1947; https://doi.org/10.3390/electronics13101947 - 16 May 2024
Viewed by 1595
Abstract
CO2 monitoring is important for achieving net-zero emissions. Here, we report on a CO2 gas sensor based on an In2O3 thin-film transistor (TFT), which is expected to realize both low-temperature operation and high sensitivity. The effect of channel thickness on TFT performance is well known; however, its effect on CO2 sensitivity has not been fully investigated. We fabricated In2O3 TFTs of various thicknesses to evaluate the effect of channel thickness on CO2 sensitivity. Consequently, TFT gas sensors with thinner channels exhibited higher CO2 sensitivity. This is because the surface effect is more prominent for a thinner film, suggesting that charge transfer between gas molecules and the channel surface through gas adsorption has a significant impact on changes in the TFT parameters in the subthreshold region. The results show that the In2O3 TFT with a thin channel is a promising candidate for CO2-sensitive TFT gas sensors and is also useful for understanding the effect of gas adsorption in oxide TFTs with very thin channels. Full article
(This article belongs to the Special Issue Feature Papers in Semiconductor Devices)

12 pages, 710 KiB  
Article
Personalized Feedback in Massive Open Online Courses: Harnessing the Power of LangChain and OpenAI API
by Miguel Morales-Chan, Hector R. Amado-Salvatierra, José Amelio Medina, Roberto Barchino, Rocael Hernández-Rizzardini and António Moreira Teixeira
Electronics 2024, 13(10), 1960; https://doi.org/10.3390/electronics13101960 - 16 May 2024
Cited by 4 | Viewed by 2730
Abstract
Studies show that feedback greatly improves student learning outcomes, but achieving this level of personalization at scale is a complex task, especially in the diverse and open environment of Massive Open Online Courses (MOOCs). This research provides a novel method for using cutting-edge artificial intelligence technology to enhance the feedback mechanism in MOOCs. The main goal of this research is to leverage AI’s capabilities to automate and refine the MOOC feedback process, with special emphasis on courses that allow students to learn at their own pace. The combination of LangChain, a framework specifically designed for applications that use language models, with the OpenAI API forms the basis of this work. This integration creates dynamic, scalable, and intelligent environments that can provide students with individualized, insightful feedback. A well-organized assessment rubric directs the feedback system, ensuring that the responses are both tailored to each learner’s unique path and aligned with academic standards and objectives. This initiative uses Generative AI to enhance MOOCs, making them more engaging, responsive, and successful for a diverse, international student body. Beyond mere automation, this technology has the potential to fundamentally transform how learning is supported in digital environments and how feedback is delivered. The initial results demonstrate increased learner satisfaction and progress, thereby validating the effectiveness of personalized feedback powered by AI. Full article
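The rubric-directed feedback described above ultimately reduces to assembling a structured prompt for the language model; the sketch below shows one plausible shape (the rubric keys, wording, and function name are illustrative, and the actual LangChain/OpenAI calls are omitted):

```python
def build_feedback_prompt(rubric, student_answer):
    """Fold an assessment rubric and a student's submission into one
    grading prompt for an LLM-backed feedback step."""
    criteria = "\n".join(f"- {name}: {descr}" for name, descr in rubric.items())
    return (
        "You are a MOOC tutor. Give personalized, constructive feedback.\n"
        "Assessment rubric:\n"
        f"{criteria}\n"
        "Student answer:\n"
        f"{student_answer}\n"
        "Address each rubric criterion and suggest one concrete improvement."
    )
```

In a LangChain pipeline, a template like this would be one chain step, with the model's response returned to the learner as the personalized feedback.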

11 pages, 4986 KiB  
Article
A Multiplexing Optical Temperature Sensing System for Induction Motors Using Few-Mode Fiber Spatial Mode Diversity
by Feng Liu, Tianle Gu and Weicheng Chen
Electronics 2024, 13(10), 1932; https://doi.org/10.3390/electronics13101932 - 15 May 2024
Cited by 2 | Viewed by 990
Abstract
Induction motors are widely applied in motor drive systems. Effective temperature monitoring is one of the keys to ensuring the reliability and optimal performance of the motors. Therefore, this paper introduces a multiplexed optical temperature sensing system for induction motors based on few-mode fiber (FMF) spatial mode diversity. By using the spatial mode dimension of FMF, fiber Bragg gratings (FBGs) carried by different spatial-mode optical paths are embedded at different positions in the motor to realize multipoint, synchronously multiplexed temperature monitoring. The paper establishes and demonstrates a photonic lantern-based mode division sensing system for motor temperature monitoring. As a proof of concept, the system demonstrates multiplexed temperature sensing for motor stators using the fundamental mode LP01 and the high-order spatial modes LP11, LP21, and LP02. The sensitivities of the FBGs carried by these modes are 0.0107 nm/°C, 0.0106 nm/°C, 0.0097 nm/°C, and 0.0116 nm/°C, respectively. The dynamic temperature changes in the stator at different positions of the motor are measured at speeds of 1k rpm, 1.5k rpm, and 2k rpm under no load, a 3 kg load, and a 5 kg load, including specific speed–load combinations such as 1.5k rpm/3 kg, 1k rpm/0 kg, and 2k rpm/5 kg, and the measured results of the different spatial modes are compared and analyzed. The findings indicate that the different spatial modes can accurately reflect temperature variations at various positions in the motor stator winding. Full article
(This article belongs to the Special Issue Sensing Technology and Intelligent Application)
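Converting a measured Bragg-wavelength shift into a temperature change uses the per-mode sensitivities reported in the abstract; only the function interface below is an assumption:

```python
# Sensitivities (nm/degC) reported for the FBGs carried by each spatial mode
SENSITIVITY_NM_PER_C = {"LP01": 0.0107, "LP11": 0.0106,
                        "LP21": 0.0097, "LP02": 0.0116}

def temperature_change(delta_lambda_nm, mode):
    """Linear FBG model: dT = d(lambda_Bragg) / S_mode, where S_mode is the
    measured wavelength-shift sensitivity of that spatial mode's grating."""
    return delta_lambda_nm / SENSITIVITY_NM_PER_C[mode]
```

For example, a 0.107 nm shift on the LP01-mode grating corresponds to a 10 °C rise at that sensor position.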

16 pages, 6397 KiB  
Article
Selecting the Best Permanent Magnet Synchronous Machine Design for Use in a Small Wind Turbine
by Marcin Lefik, Anna Firych-Nowacka, Michal Lipian, Malgorzata Brzozowska and Tomasz Smaz
Electronics 2024, 13(10), 1929; https://doi.org/10.3390/electronics13101929 - 15 May 2024
Cited by 1 | Viewed by 3691
Abstract
The article describes the selection of a permanent magnet synchronous machine design that could be implemented in a small wind turbine designed by the GUST student organization together with researchers working at the Technical University of Lodz. Based on measurements of the characteristics of available machines, eight initial designs of machines with different rotor designs were proposed. The size of the stator, the number of pole pairs, and the dimensions of the magnets were used as initial parameters of the designed machines. The analysis was carried out with respect to the K-index, the so-called benefit index. The idea was to make the selected design as efficient as possible while keeping production costs and manufacturing time low. This paper describes how to select the best design of a permanent magnet synchronous generator intended to work with a small wind turbine. All generator parameters were selected with the competition requirements in mind, as the designed generator will be used in the authors' wind turbine. Based on the determined characteristics of the generator variants and the value of the K-index, a generator with a latent magnet rotor was selected as the best solution. The aforementioned K-index is a proprietary concept developed for the selection of the most suitable generator design. This paper did not use optimization methods; the analysis was only supported by the K-index. Full article
18 pages, 64491 KiB  
Article
A 5K Efficient Low-Light Enhancement Model by Estimating Increment between Dark Image and Transmission Map Based on Local Maximum Color Value Prior
by Qikang Deng, Dongwon Choo, Hyochul Ji and Dohoon Lee
Electronics 2024, 13(10), 1814; https://doi.org/10.3390/electronics13101814 - 8 May 2024
Cited by 1 | Viewed by 1506
Abstract
Low-light enhancement (LLE) has seen significant advancements over decades, leading to substantial improvements in image quality that even surpass ground truth. However, these advancements have come with a downside, as the models grew in size and complexity, losing the lightweight and real-time capabilities crucial for applications like surveillance, autonomous driving, smartphones, and unmanned aerial vehicles (UAVs). To address this challenge, we propose an exceptionally lightweight model with just around 5K parameters, which is capable of delivering high-quality LLE results. Our method focuses on estimating the incremental changes from dark images to transmission maps based on the local maximum color value prior, and we introduce a novel three-channel transmission map to capture more details and information compared to the traditional one-channel transmission map. This innovative design allows for more effective matching of incremental estimation results, enabling distinct transmission adjustments to be applied to the R, G, and B channels of the image. This streamlined approach ensures that our model remains lightweight, making it suitable for deployment on low-performance devices without compromising real-time performance. Our experiments confirm the effectiveness of our model, achieving high-quality LLE comparable to the IAT (local) model. Impressively, our model achieves this level of performance while utilizing only 0.512 GFLOPs and 4.7K parameters, representing just 39.1% of the GFLOPs and 23.5% of the parameters used by the IAT (local) model. Full article
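The reported efficiency ratios can be checked with simple arithmetic. The IAT (local) totals below are back-computed from the percentages given in the abstract, not taken from the paper itself:

```python
# Back-compute the IAT (local) budget implied by the abstract's figures:
# the proposed model uses 0.512 GFLOPs (39.1% of IAT) and 4.7K params (23.5%).
ours_gflops, ours_params_k = 0.512, 4.7
iat_gflops = ours_gflops / 0.391      # implied IAT (local) GFLOPs
iat_params_k = ours_params_k / 0.235  # implied IAT (local) parameters (K)
print(round(iat_gflops, 2), round(iat_params_k, 1))  # → 1.31 20.0
```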
19 pages, 2212 KiB  
Article
Design and Development of Multi-Agent Reinforcement Learning Intelligence on the Robotarium Platform for Embedded System Applications
by Lorenzo Canese, Gian Carlo Cardarilli, Mohammad Mahdi Dehghan Pir, Luca Di Nunzio and Sergio Spanò
Electronics 2024, 13(10), 1819; https://doi.org/10.3390/electronics13101819 - 8 May 2024
Cited by 18 | Viewed by 1836
Abstract
This research explores the use of the Q-learning for real-time swarm (Q-RTS) multi-agent reinforcement learning (MARL) algorithm for robotic applications. This study investigates the efficacy of Q-RTS in reducing the convergence time to a satisfactory movement policy through the successful implementation of four and eight trained agents. Q-RTS has been shown to significantly reduce search time in terms of training iterations, from almost a million iterations with one agent to 650,000 iterations with four agents and 500,000 iterations with eight agents. The scalability of the algorithm was addressed by testing it on several agent configurations. A central focus was placed on the design of a sophisticated reward function, considering various postures of the agents and their critical role in optimizing the Q-learning algorithm. Additionally, this study delved into the robustness of trained agents, revealing their ability to adapt to dynamic environmental changes. The findings have broad implications for improving the efficiency and adaptability of robotic systems in various applications such as IoT and embedded systems. The algorithm was tested and implemented using the Georgia Tech Robotarium platform, showing its feasibility for the above-mentioned applications. Full article
(This article belongs to the Special Issue Applied Machine Learning in Intelligent Systems)
15 pages, 5670 KiB  
Article
Shaping of the Frequency Response of Photoacoustic Cells with Multi-Cavity Structures
by Wiktor Porakowski and Tomasz Starecki
Electronics 2024, 13(9), 1786; https://doi.org/10.3390/electronics13091786 - 6 May 2024
Viewed by 1436
Abstract
In the great majority of cases, the design of resonant photoacoustic cells is based on the use of resonators excited at the frequencies of their main resonances. This work presents a solution in which the use of a multi-cavity structure with the appropriate selection of the mechanical parameters of the cavities and the interconnecting ducts allows for the shaping of the frequency response of the cell. Such solutions may be particularly useful when the purpose of the designed cells is operation at multiple frequencies, e.g., in applications with the simultaneous detection of multiple gaseous compounds. The concept is tested with cells made using 3D printing technology. The measured frequency responses of the tested cells show very good agreement with the simulation results. This allows for an approach in which the development of a cell with the desired frequency response can be initially based on modeling, without the need for the time-consuming and expensive process of manufacturing and measuring numerous modifications of the cell. Full article
22 pages, 5575 KiB  
Article
Advancing into Millimeter Wavelengths for IoT: Multibeam Modified Planar Luneburg Lens Antenna with Porous Plastic Material
by Javad Pourahmadazar, Bal S. Virdee and Tayeb A. Denidni
Electronics 2024, 13(9), 1605; https://doi.org/10.3390/electronics13091605 - 23 Apr 2024
Cited by 1 | Viewed by 1668
Abstract
This paper introduces an innovative antenna design utilizing a cylindrical dielectric Luneburg lens tailored for 60 GHz Internet of Things (IoT) applications. To optimize V-band communications, the permittivity of the dielectric medium is strategically adjusted by precisely manipulating the physical porosity. In IoT scenarios, employing a microstrip dipole antenna with an emission pattern resembling cos^10(θ) enhances beam illumination within the waveguide, thereby improving communication and sensing capabilities. The refractive index gradient of the Luneburg lens is modified by manipulating the material’s porosity using air holes, prioritizing signal accuracy and reliability. Fabricated with polyimide using 3D printing, the proposed antenna features a slim profile ideal for IoT applications with space constraints, such as smart homes and unmanned aerial vehicles. Its innovative design is underscored by selective laser sintering (SLS), offering scalable and cost-effective production. Measured results demonstrate the antenna’s exceptional performance, surpassing IoT deployment standards. This pioneering approach to designing multibeam Luneburg lens antennas, leveraging 3D printing’s porosity control for millimeter-wave applications, represents a significant advancement in antenna technology with scanning ability between −67 and 67 degrees. It paves the way for enhanced IoT infrastructure characterized by advanced sensing capabilities and improved connectivity. Full article
(This article belongs to the Special Issue Antennas for IoT Devices)
14 pages, 6484 KiB  
Article
Unveiling Acoustic Cavitation Characterization in Opaque Chambers through a Low-Cost Piezoelectric Sensor Approach
by José Fernandes, Paulo J. Ramísio and Hélder Puga
Electronics 2024, 13(8), 1581; https://doi.org/10.3390/electronics13081581 - 20 Apr 2024
Cited by 4 | Viewed by 2056
Abstract
This study investigates the characterization of acoustic cavitation in a water-filled, opaque chamber induced by ultrasonic waves at 20 kHz. It examines the effect of different acoustic radiator geometries on cavitation generation across varying electrical power levels. A cost-effective piezoelectric sensor, precisely positioned, quantifies cavitation under assorted power settings. Two acoustic radiator shape configurations, one with holes and another solid, were examined. The piezoelectric sensor demonstrated efficacy in measuring acoustic cavitation, corroborating the existing literature. This was achieved through the Fast Fourier Transform (FFT) analysis of voltage data, specifically targeting sub-harmonic patterns, thereby providing a robust method for cavitation detection. Results demonstrate that perforated geometries enhance cavitation intensity at lower power levels, while solid shapes predominantly affect cavitation axially, exhibiting decreased activity at minimal power. The findings recommend using two different shape geometries on the acoustic radiator for efficient cavitation detection, highlighting intense cavitation on radial walls and cavitation generation on the bottom. Due to the stochastic nature of cavitation, averaging data is critical. The spatial limitation of the sensor necessitates prioritizing specific areas over complete coverage, with multiple sensors recommended for comprehensive cavitation pattern analysis. Full article
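A minimal sketch of the detection idea described above, assuming a synthetic sensor voltage trace: the strength of the f0/2 sub-harmonic (10 kHz for a 20 kHz drive) is measured with a single-bin DFT (the Goertzel algorithm) and compared against a threshold. The signal, sample rate, and threshold are illustrative, not the authors' experimental setup.

```python
import math

def goertzel_amplitude(samples, fs, f):
    """Amplitude of the single frequency bin nearest f (Hz) at sample rate fs."""
    n = len(samples)
    k = round(n * f / fs)
    w = 2.0 * math.pi * k / n
    coeff = 2.0 * math.cos(w)
    s_prev = s_prev2 = 0.0
    for x in samples:
        s = x + coeff * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    power = s_prev2**2 + s_prev**2 - coeff * s_prev * s_prev2
    return 2.0 * math.sqrt(max(power, 0.0)) / n

fs, f0 = 200_000, 20_000  # sample rate and ultrasonic drive frequency (Hz)
t = [i / fs for i in range(2000)]
# Synthetic sensor voltage: drive tone plus an f0/2 sub-harmonic from cavitation.
v = [math.sin(2*math.pi*f0*ti) + 0.3*math.sin(2*math.pi*(f0/2)*ti) for ti in t]
sub = goertzel_amplitude(v, fs, f0 / 2)
print(sub > 0.1)  # sub-harmonic present → cavitation flagged
```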
9 pages, 5267 KiB  
Communication
Monolithically Integrated GaN Power Stage for More Sustainable 48 V DC–DC Converters
by Michael Basler, Stefan Mönch, Richard Reiner, Fouad Benkhelifa and Rüdiger Quay
Electronics 2024, 13(7), 1351; https://doi.org/10.3390/electronics13071351 - 3 Apr 2024
Cited by 2 | Viewed by 1511
Abstract
In this article, a fully monolithically integrated GaN power stage with a half-bridge, driver, level shifter, dead time and voltage mode control for 48 V DC–DC converters is proposed and analyzed. The design of the GaN IC is presented in detail, and measurements of the single function blocks and the DC–DC converter up to 48 V are shown. Finally, considerations are given on a life cycle assessment with regard to the GaN power integration. This GaN power IC or stage demonstrates a higher level of integration, resulting in a reduced bill of materials and therefore lower climate impact. Full article
13 pages, 3035 KiB  
Article
Anomaly Detection in Connected and Autonomous Vehicle Trajectories Using LSTM Autoencoder and Gaussian Mixture Model
by Boyu Wang, Wan Li and Zulqarnain H. Khattak
Electronics 2024, 13(7), 1251; https://doi.org/10.3390/electronics13071251 - 28 Mar 2024
Cited by 5 | Viewed by 2898
Abstract
Connected and Autonomous Vehicles (CAVs) technology has the potential to transform the transportation system. Although these new technologies have many advantages, their implementation raises significant concerns regarding safety, security, and privacy. Anomalies in sensor data caused by errors or cyberattacks can cause severe accidents. To address the issue, this study proposed an innovative anomaly detection algorithm, namely the LSTM Autoencoder with Gaussian Mixture Model (LAGMM). This model supports anomalous CAV trajectory detection in real time, leveraging the communication capabilities of CAV sensors. The LSTM Autoencoder is applied to generate low-rank representations and reconstruction errors for each input data point, while the Gaussian Mixture Model (GMM) is employed for its strength in density estimation. The LSTM Autoencoder and the GMM are optimized jointly. The study utilizes realistic CAV data from a platooning experiment conducted for Cooperative Automated Research Mobility Applications (CARMAs). The experiment findings indicate that the proposed LAGMM approach enhances detection accuracy by 3% and precision by 6.4% compared to the existing state-of-the-art methods, suggesting a significant improvement in the field. Full article
(This article belongs to the Special Issue Vehicle Technologies for Sustainable Smart Cities and Societies)
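The scoring step of such a detector can be sketched in isolation: a trajectory point's reconstruction error is turned into a likelihood under a Gaussian mixture fitted to errors from normal driving, and low-likelihood points are flagged. The mixture parameters and threshold below are invented for illustration; in the paper they are learned jointly with the autoencoder.

```python
import math

def gmm_log_likelihood(x, components):
    """Log-likelihood of scalar x under a 1-D Gaussian mixture.
    components: list of (weight, mean, std)."""
    total = sum(w * math.exp(-0.5 * ((x - m) / s) ** 2) / (s * math.sqrt(2 * math.pi))
                for w, m, s in components)
    return math.log(total)

# Illustrative mixture fitted to reconstruction errors of normal trajectories.
mix = [(0.7, 0.05, 0.02), (0.3, 0.12, 0.04)]
threshold = -5.0  # log-likelihood below this → anomalous point

for err in (0.06, 0.45):  # typical vs. anomalous reconstruction error
    print(err, gmm_log_likelihood(err, mix) < threshold)
```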
14 pages, 27254 KiB  
Article
GAN-Based Data Augmentation with Vehicle Color Changes to Train a Vehicle Detection CNN
by Aroona Ayub and HyungWon Kim
Electronics 2024, 13(7), 1231; https://doi.org/10.3390/electronics13071231 - 26 Mar 2024
Cited by 7 | Viewed by 1624
Abstract
Object detection is a challenging task that requires a lot of labeled data to train convolutional neural networks (CNNs) that can achieve human-level accuracy. However, such data are not easy to obtain, as they involve significant manual work and costs to annotate the objects in images. Researchers have used traditional data augmentation techniques to increase the amount of training data available to them. A recent trend in object detection is to use generative models to automatically create annotated data that can enrich a training set and improve the performance of the target model. This paper presents a method of training the proposed ColorGAN network, which is used to generate augmented data for the target domain of interest with the least compromise in quality. We demonstrate a method to train a GAN with images of vehicles in different colors. Then, we demonstrate that our ColorGAN can change the color of vehicles of any given vehicle dataset to a set of specified colors, which can serve as an augmented training dataset. Our experimental results show that the augmented dataset generated by the proposed method helps enhance the detection performance of a CNN for applications where the original training data are limited. Our experiments also show that the model can achieve a higher mAP of 76% when the model is trained with augmented images along with the original training dataset. Full article
(This article belongs to the Special Issue New Trends in Artificial Neural Networks and Its Applications)
15 pages, 6550 KiB  
Article
FI-NPI: Exploring Optimal Control in Parallel Platform Systems
by Ruiyang Wang, Qiuxiang Gu, Siyu Lu, Jiawei Tian, Zhengtong Yin, Lirong Yin and Wenfeng Zheng
Electronics 2024, 13(7), 1168; https://doi.org/10.3390/electronics13071168 - 22 Mar 2024
Cited by 59 | Viewed by 1529
Abstract
Typically, the current- and speed-loop closure of the parallel platform's servo motors is accomplished with incremental PI regulation. This control method has strong robustness, but the parameter tuning process is cumbersome, and it is difficult to achieve the optimal control state. In order to further optimize performance, this paper proposes a double-loop control structure based on fuzzy integral and neuron proportional integral (FI-NPI) control. The structure makes full use of the control advantages of the fuzzy controller and integrator to improve the performance of speed closed-loop control. Through the feedforward branch, the speed error is used as the teacher signal for neuron supervised learning, which improves the effect of current closed-loop control. Through comparative simulation experiments, this paper verifies that the FI-NPI controller has a faster dynamic response than the traditional PI controller. Finally, the FI-NPI controller is implemented in C in the servo drive's lower computer, and a speed closed-loop test of the BLDC motor is carried out. The experimental results show that the FI-NPI double-loop controller is better than the traditional double-PI controller in performance indicators such as convergence rate and RMSE, which confirms that the FI-NPI double-loop controller is more suitable for BLDC servo control. Full article
(This article belongs to the Special Issue State-of-the-Art Research in Systems and Control Engineering)
25 pages, 4974 KiB  
Article
Augmented Reality in Industry 4.0 Assistance and Training Areas: A Systematic Literature Review and Bibliometric Analysis
by Ginés Morales Méndez and Francisco del Cerro Velázquez
Electronics 2024, 13(6), 1147; https://doi.org/10.3390/electronics13061147 - 21 Mar 2024
Cited by 6 | Viewed by 3711
Abstract
Augmented reality (AR) technology is making a strong appearance on the industrial landscape, driven by significant advances in technological tools and developments. Its application in areas such as training and assistance has attracted the attention of the research community, which sees AR as an opportunity to provide operators with a more visual, immersive and interactive environment. This article deals with an analysis of the integration of AR in the context of the fourth industrial revolution, commonly referred to as Industry 4.0. Starting with a systematic review, 60 relevant studies were identified from the Scopus and Web of Science databases. These findings were used to build bibliometric networks, providing a broad perspective on AR applications in training and assistance in the context of Industry 4.0. The article presents the current landscape, existing challenges and future directions of AR research applied to industrial training and assistance based on a systematic literature review and citation network analysis. The findings highlight a growing trend in AR research, with a particular focus on addressing and overcoming the challenges associated with its implementation in complex industrial environments. Full article
(This article belongs to the Special Issue Perception and Interaction in Mixed, Augmented, and Virtual Reality)
17 pages, 5361 KiB  
Article
A Real-Time Spoofing Detection Method Using Three Low-Cost Antennas in Satellite Navigation
by Jiajia Chen, Xueying Wang, Zhibo Fang, Cheng Jiang, Ming Gao and Ying Xu
Electronics 2024, 13(6), 1134; https://doi.org/10.3390/electronics13061134 - 20 Mar 2024
Cited by 20 | Viewed by 1801
Abstract
The vulnerability of civil receivers of the Global Navigation Satellite System (GNSS) to spoofing jamming has raised significant concerns in recent times. Traditional multi-antenna spoofing detection methods are limited in application scenarios and come with high hardware costs. To address this issue, this paper proposes a novel GNSS spoofing detection method utilizing three low-cost collinear antennas. By leveraging the collinearity information of the antennas, this method effectively constrains the observation equation, leading to improved estimation accuracy of the pointing vector. Furthermore, by employing a binary statistical detection model based on the sum of squared errors (SSE) between the observed value and the estimated value of the pointing vector, real-time spoofing signal detection is enabled. Simulation results confirm the efficacy of the proposed statistical model, with the error of the skewness coefficient not exceeding 0.026. Experimental results further demonstrate that the collinear antenna-based method reduces the standard deviation of the angle deviation of the pointing vector by over 55.62% in the presence of spoofing signals. Moreover, the experiments indicate that with a 1 m baseline, this method achieves 100% spoofing detection. Full article
(This article belongs to the Special Issue Recent Advances in Autonomous Navigation)
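The SSE test statistic can be illustrated directly: the sum of squared errors between the observed and estimated pointing vectors stays small for authentic signals and grows when the measured direction is inconsistent with the predicted one. The vectors and threshold below are made up for illustration and are not taken from the paper.

```python
def sse(observed, estimated):
    """Sum of squared errors between two 3-D pointing vectors."""
    return sum((o - e) ** 2 for o, e in zip(observed, estimated))

# Illustrative unit pointing vectors (estimated from the collinear array
# vs. observed); all values are invented for the example.
estimated = (0.267, 0.535, 0.802)
authentic = (0.270, 0.530, 0.803)  # consistent with the estimate
spoofed   = (0.802, 0.535, 0.267)  # direction inconsistent with the estimate

threshold = 0.01  # illustrative decision threshold on the SSE statistic
print(sse(authentic, estimated) < threshold,  # True: no alarm
      sse(spoofed, estimated) > threshold)    # True: spoofing flagged
```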
17 pages, 6835 KiB  
Article
Grid Forming Technologies to Improve Rate of Change in Frequency and Frequency Nadir: Analysis-Based Replicated Load Shedding Events
by Oscar D. Garzon, Alexandre B. Nassif and Matin Rahmatian
Electronics 2024, 13(6), 1120; https://doi.org/10.3390/electronics13061120 - 19 Mar 2024
Cited by 2 | Viewed by 2303
Abstract
Electric power generation is quickly transitioning toward nontraditional inverter-based resources (IBRs). Prevalent devices today are solar PV, wind generators, and battery energy storage systems (BESSs) based on electrochemical packs. These IBRs are interconnected throughout the power system via power electronics inverter bridges, which have sophisticated controls. This paper studies the impacts and benefits resulting from the integration of grid forming (GFM) inverters and energy storage on the stability of power systems by replicating real events in which the loss of generation units resulted in large load shedding events. First, the authors tuned the power system dynamic model in the Power System Simulator for Engineering (PSSE) to replicate the event records and, upon integrating the IBRs, analyzed the system dynamic responses of the BESS. This was conducted for both GFM and grid following (GFL) modes. Additionally, models for a Grid Forming Static Synchronous Compensator (GFM STATCOM) were also created and simulated, allowing the benefits of this technology to be quantified and a techno-economic analysis to be performed against GFM BESSs. The results presented in this paper demonstrate the need for industry standardization in the application of GFM inverters to unleash their benefits to the bulk electric grid. The results also demonstrate that the GFM STATCOM is a very capable system that can augment the bulk system inertia, effectively reducing the occurrence of load shedding events. Full article
28 pages, 5284 KiB  
Article
IoT-Based Intrusion Detection System Using New Hybrid Deep Learning Algorithm
by Sami Yaras and Murat Dener
Electronics 2024, 13(6), 1053; https://doi.org/10.3390/electronics13061053 - 12 Mar 2024
Cited by 41 | Viewed by 7011
Abstract
The most significant threat that IoT networks may encounter is cyberattacks. The most commonly encountered attacks among these threats are DDoS attacks. After attacks, the communication traffic of the network can be disrupted, and the energy of sensor nodes can quickly deplete. Therefore, the detection of occurring attacks is of great importance. Considering the numerous sensor nodes in the established network, analyzing the network traffic data through traditional methods can become impossible, so analyzing this network traffic in a big data environment is necessary. This study aims to analyze the obtained network traffic dataset in a big data environment and detect attacks in the network using a deep learning algorithm. This study is conducted using PySpark with Apache Spark in the Google Colaboratory (Colab) environment. The Keras and Scikit-Learn libraries are utilized in the study. The ‘CICIoT2023’ and ‘TON_IoT’ datasets are used for training and testing the model. The features in the datasets are reduced using the correlation method, ensuring the inclusion of significant features in the tests. A hybrid deep learning algorithm is designed using a one-dimensional CNN and LSTM. The developed method was compared with ten machine learning and deep learning algorithms. The model’s performance was evaluated using the accuracy, precision, recall, and F1 metrics. Following the study, an accuracy rate of 99.995% for binary classification and 99.96% for multiclass classification is achieved on the ‘CICIoT2023’ dataset. On the ‘TON_IoT’ dataset, a binary classification success rate of 98.75% is reached. Full article
(This article belongs to the Section Artificial Intelligence)
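The correlation-based feature reduction mentioned above can be sketched with a plain Pearson correlation filter: features whose absolute correlation with the attack label falls below a cutoff are dropped. The toy traffic records, feature names, and cutoff are illustrative only, not the study's actual dataset or settings.

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / math.sqrt(vx * vy)

# Toy traffic records: two features track the attack label, one is noise.
label    = [0, 0, 0, 1, 1, 1]          # 0 = benign, 1 = DDoS
features = {
    "pkt_rate":  [10, 12, 11, 95, 90, 97],        # strongly label-correlated
    "syn_ratio": [0.1, 0.2, 0.1, 0.9, 0.8, 0.9],  # strongly label-correlated
    "ttl_noise": [64, 63, 65, 64, 63, 65],        # uninformative
}
cutoff = 0.5
kept = [f for f, v in features.items() if abs(pearson(v, label)) >= cutoff]
print(kept)  # → ['pkt_rate', 'syn_ratio']
```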
21 pages, 2543 KiB  
Article
Assessing the Reliability of Machine Learning Models Applied to the Mental Health Domain Using Explainable AI
by Vishnu Pendyala and Hyungkyun Kim
Electronics 2024, 13(6), 1025; https://doi.org/10.3390/electronics13061025 - 8 Mar 2024
Cited by 4 | Viewed by 3055
Abstract
Machine learning is increasingly and ubiquitously being used in the medical domain. Evaluation metrics like accuracy, precision, and recall may indicate the performance of the models but not necessarily the reliability of their outcomes. This paper assesses the effectiveness of a number of machine learning algorithms applied to an important dataset in the medical domain, specifically, mental health, by employing explainability methodologies. Using multiple machine learning algorithms and model explainability techniques, this work provides insights into the models’ workings to help determine the reliability of the machine learning algorithm predictions. The results are not intuitive. It was found that the models were focusing significantly on less relevant features and, at times, unsound ranking of the features to make the predictions. This paper therefore argues that it is important for research in applied machine learning to provide insights into the explainability of models in addition to other performance metrics like accuracy. This is particularly important for applications in critical domains such as healthcare. Full article
(This article belongs to the Special Issue Machine Learning for Biomedical Applications)
21 pages, 3294 KiB  
Review
Optimizing Piezoelectric Energy Harvesting from Mechanical Vibration for Electrical Efficiency: A Comprehensive Review
by Demeke Girma Wakshume and Marek Łukasz Płaczek
Electronics 2024, 13(5), 987; https://doi.org/10.3390/electronics13050987 - 5 Mar 2024
Cited by 11 | Viewed by 9804
Abstract
In the current era, energy resources harvested from the environment via piezoelectric materials are not only used for self-powered electronic devices, but also play a significant role in creating a pleasant living environment. Piezoelectric materials have the potential to produce power from microwatts to milliwatts depending on the ambient conditions. The energy obtained from these materials is used for powering small electronic devices such as sensors, health monitoring devices, and various smart electronic gadgets like watches, personal computers, and cameras. This review explains the comprehensive concepts related to piezoelectric (classical and non-classical) materials, energy harvesting from the mechanical vibration of piezoelectric materials, structural modelling, and their optimization. Non-conventional smart materials, such as polyceramics, polymers, or composite piezoelectric materials, stand out due to their slender actuator and sensor profiles, offering superior performance, flexibility, and reliability at competitive costs despite their susceptibility to performance fluctuations caused by temperature variations. Accurate modeling and performance optimization, employing analytical, numerical, and experimental methodologies, are imperative. This review also furthers research and development in optimizing piezoelectric energy utilization, suggesting the need for continued experimentation to select optimal materials and structures for various energy applications. Full article
(This article belongs to the Special Issue Energy Harvesting and Storage Technologies)
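The microwatt-to-milliwatt scale quoted in the abstract can be made concrete with the classic linear single-degree-of-freedom resonant harvester estimate (often attributed to Williams and Yates). This is an illustrative sketch only; the proof mass, resonance frequency, damping ratio, and base acceleration below are assumed values, not figures from the paper:

```python
import math

def resonant_harvester_power(m_kg, f_n_hz, zeta_total, accel_ms2):
    """Peak average power of a linear single-DOF harvester driven at resonance:
    P = m * a^2 / (4 * omega_n * zeta_T), with zeta_T the total damping ratio."""
    omega_n = 2 * math.pi * f_n_hz
    return m_kg * accel_ms2**2 / (4 * omega_n * zeta_total)

# Assumed example: 1 g proof mass, 120 Hz resonance, 2% total damping,
# 0.5 m/s^2 base acceleration (typical of ambient machine vibration)
p = resonant_harvester_power(1e-3, 120.0, 0.02, 0.5)
print(f"{p * 1e6:.1f} microwatts")
```

With these numbers the estimate lands in the single-microwatt range, which is why harvested power is usually quoted per device in the micro-to-milliwatt band.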

17 pages, 6522 KiB  
Article
Design of a Convolutional Neural Network Accelerator Based on On-Chip Data Reordering
by Yang Liu, Yiheng Zhang, Xiaoran Hao, Lan Chen, Mao Ni, Ming Chen and Rong Chen
Electronics 2024, 13(5), 975; https://doi.org/10.3390/electronics13050975 - 4 Mar 2024
Cited by 2 | Viewed by 2891
Abstract
Convolutional neural networks have been widely applied in the field of computer vision. In convolutional neural networks, convolution operations account for more than 90% of the total computational workload. The current mainstream approach to achieving highly energy-efficient convolution operations is through dedicated hardware accelerators. Convolution operations involve a significant amount of weight and input feature data. Due to the limited on-chip cache space in accelerators, the computation process involves a significant amount of off-chip DRAM memory access. The latency of DRAM access is 20 times higher than that of SRAM, and the energy consumption of DRAM access is 100 times higher than that of multiply–accumulate (MAC) units. It is evident that the “memory wall” and “power wall” issues in neural network computation remain challenging. This paper presents the design of a hardware accelerator for convolutional neural networks that employs a dataflow optimization strategy based on on-chip data reordering. This strategy improves on-chip data utilization and reduces the frequency of data exchanges between the on-chip cache and off-chip DRAM. The experimental results indicate that, compared to an accelerator without this strategy, it reduces the data exchange frequency by up to 82.9%. Full article
(This article belongs to the Special Issue Artificial Intelligence and Signal Processing: Circuits and Systems)
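The benefit of reordering data to increase on-chip reuse can be illustrated with a back-of-the-envelope DRAM traffic model. This is a toy sketch, not the paper's accelerator: the layer dimensions are arbitrary, and the "full reuse" case is an idealized lower bound in which each tensor crosses the DRAM boundary exactly once:

```python
def conv_macs(h, w, cin, cout, k):
    # MAC count for a stride-1 'same' convolution layer
    return h * w * cin * cout * k * k

def dram_words_no_reuse(h, w, cin, cout, k):
    # Worst case: every MAC pulls one weight and one activation from DRAM
    return 2 * conv_macs(h, w, cin, cout, k)

def dram_words_with_reuse(h, w, cin, cout, k):
    # Idealized on-chip reuse: weights, input feature map, and output
    # feature map each cross the DRAM boundary exactly once
    weights = cout * cin * k * k
    ifmap = h * w * cin
    ofmap = h * w * cout
    return weights + ifmap + ofmap

layer = (56, 56, 64, 64, 3)  # hypothetical 3x3 conv layer, 56x56x64 -> 56x56x64
naive, reuse = dram_words_no_reuse(*layer), dram_words_with_reuse(*layer)
print(f"reduction: {1 - reuse / naive:.1%}")
```

The two bounds differ by orders of magnitude, which is why dataflow strategies that keep data on-chip dominate accelerator design; a real reduction like the paper's 82.9% sits between these extremes.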

18 pages, 1111 KiB  
Article
Control Performance Requirements for Automated Driving Systems
by Trevor Vidano and Francis Assadian
Electronics 2024, 13(5), 902; https://doi.org/10.3390/electronics13050902 - 27 Feb 2024
Cited by 4 | Viewed by 1789
Abstract
This research investigates the development of risk-based performance requirements for the control of an automated driving system (ADS). The proposed method begins by determining the target level of safety for the virtual driver of an ADS. The underlying assumptions are informed by existing data. Next, geometric models of the road and vehicle are used to derive deterministic performance levels of the virtual driver. To integrate the risk and performance requirements seamlessly, we propose new definitions for the errors associated with the planner, pose, and control modules. These definitions facilitate the derivation of stochastic performance requirements for each module, thus ensuring an overall target level of safety. Notably, they also enable real-time controller performance monitoring, potentially allowing fault detection linked to the system’s overall safety target. At a high level, this approach argues that the requirements for the virtual driver’s modules should be designed simultaneously. To illustrate the approach, the technique is applied to a research project from the literature that developed an automated steering system for an articulated bus. This example shows that the method generates achievable performance requirements that are verifiable through experimental testing, and it highlights the importance of validating the underlying assumptions for effective risk management. Full article
(This article belongs to the Special Issue Autonomous and Connected Vehicles)
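The idea of deriving module-level requirements from one top-level safety target can be sketched as a simple hazard-rate budget. This is illustrative only: the rates, the shares, and the series-system/independence assumptions are ours, not the paper's method:

```python
def allocate_failure_rate(target_per_hour, module_shares):
    """Split a top-level target hazard rate across virtual-driver modules.
    Assumes module failures are independent and any single failure is
    hazardous, so the rates add (series system) and shares must sum to 1."""
    assert abs(sum(module_shares.values()) - 1.0) < 1e-9
    return {m: target_per_hour * s for m, s in module_shares.items()}

# Hypothetical numbers: 1e-8/h overall target, split across the three modules
budget = allocate_failure_rate(1e-8, {"planner": 0.4, "pose": 0.3, "control": 0.3})
print(budget)
```

Each per-module rate then becomes a verifiable requirement, which is what makes run-time monitoring against the overall safety target possible.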

14 pages, 1300 KiB  
Article
Hybrid FSO/RF Communications in Space–Air–Ground Integrated Networks: A Reduced Overhead Link Selection Policy
by Petros S. Bithas, Hector E. Nistazakis, Athanassios Katsis and Liang Yang
Electronics 2024, 13(4), 806; https://doi.org/10.3390/electronics13040806 - 19 Feb 2024
Cited by 6 | Viewed by 2154
Abstract
The space–air–ground integrated network (SAGIN) is considered an enabler of sixth-generation (6G) networks. By integrating terrestrial and non-terrestrial (satellite, aerial) networks, SAGIN is a promising solution for providing reliable connectivity everywhere and at all times. Its availability can be further enhanced if hybrid free space optical (FSO)/radio frequency (RF) links are adopted. In this paper, the performance of a hybrid FSO/RF communication system operating in SAGIN is analytically evaluated. In the considered system, a high-altitude platform station (HAPS) forwards the satellite signal to the ground station. The FSO channel model takes into account turbulence, pointing errors, and path losses, while for the RF links a relatively new composite fading model is considered. In this context, a new link selection scheme is proposed that is designed to reduce the signaling overhead required for switching operations between the RF and FSO links. The analytical framework developed is based on Markov chain theory. Capitalizing on this framework, the performance of the system is investigated using the criteria of outage probability and the average number of link estimations. The numerical results reveal that the new selection scheme offers a good compromise between performance and complexity. Full article
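The trade-off between outage probability and the number of link estimations can be illustrated with a toy Monte Carlo model of infrequent link checking. This is a sketch under strong simplifications (i.i.d. on/off links and a fixed check period), unlike the paper's Markov-chain analysis over composite fading channels; all probabilities are made up:

```python
import random

def simulate(p_fso_ok, p_rf_ok, slots=100_000, check_every=4, seed=1):
    """Toy reduced-overhead policy: stay on the current link and re-estimate
    the channel only every `check_every` slots, switching if the current
    link is found down. Returns (outage probability, estimations per slot)."""
    rng = random.Random(seed)
    link = "fso"
    outages = estimations = 0
    for t in range(slots):
        ok = {"fso": rng.random() < p_fso_ok, "rf": rng.random() < p_rf_ok}
        if t % check_every == 0:          # pay signaling cost only here
            estimations += 1
            if not ok[link]:
                link = "rf" if link == "fso" else "fso"
        if not ok[link]:
            outages += 1
    return outages / slots, estimations / slots

out, est = simulate(0.95, 0.99)
print(f"outage={out:.4f}, estimations/slot={est:.2f}")
```

Raising `check_every` cuts the estimation overhead linearly while degrading outage only gradually, which is the compromise the abstract refers to.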

26 pages, 761 KiB  
Article
A Hybrid Group Multi-Criteria Approach Based on SAW, TOPSIS, VIKOR, and COPRAS Methods for Complex IoT Selection Problems
by Constanta Zoie Radulescu and Marius Radulescu
Electronics 2024, 13(4), 789; https://doi.org/10.3390/electronics13040789 - 17 Feb 2024
Cited by 12 | Viewed by 2083
Abstract
The growth of Internet of Things (IoT) systems is driven by their potential to improve efficiency, enhance decision-making, and create new business opportunities across various domains. In this paper, the main selection problems in IoT-type systems, the criteria used in multi-criteria evaluation, and the multi-criteria methods used for solving IoT selection problems are identified. A Hybrid Group Multi-Criteria Approach for solving selection problems in IoT-type systems is then proposed. The approach combines the Best Worst Method (BWM) for weighting; the multi-criteria methods Simple Additive Weighting (SAW), Technique for Order Preference by Similarity to an Ideal Solution (TOPSIS), VIseKriterijumska Optimizacija I Kompromisno Resenje (VIKOR), and Complex Proportional Assessment (COPRAS); and a method that merges the solutions obtained with the four multi-criteria methods into a single solution. The SAW, TOPSIS, VIKOR, and COPRAS methods are analyzed with respect to their advantages, disadvantages, inputs, outputs, measurement scales, types of normalization, aggregation methods, parameters, complexity of implementation, and interactivity. An application of the Hybrid Group Multi-Criteria Approach to IoT platform selection is presented, together with a comparison between the SAW, TOPSIS, VIKOR, and COPRAS solutions and the solution of the proposed approach. A Spearman correlation analysis is also presented. Full article
(This article belongs to the Special Issue Advances in Decision Making for Complex Systems)
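Two of the four methods are compact enough to sketch, and doing so shows why a combination step is useful: even on a toy decision matrix (hypothetical IoT-platform scores and weights, all treated as benefit criteria), SAW and TOPSIS need not agree on the best alternative:

```python
def saw(matrix, weights):
    """Simple Additive Weighting with max normalization (benefit criteria only)."""
    cols = list(zip(*matrix))
    maxs = [max(c) for c in cols]
    return [sum(w * x / mx for w, x, mx in zip(weights, row, maxs)) for row in matrix]

def topsis(matrix, weights):
    """TOPSIS with vector normalization; returns closeness to the ideal solution."""
    cols = list(zip(*matrix))
    norms = [sum(x * x for x in c) ** 0.5 for c in cols]
    v = [[w * x / n for w, x, n in zip(weights, row, norms)] for row in matrix]
    ideal = [max(c) for c in zip(*v)]
    anti = [min(c) for c in zip(*v)]
    def dist(row, ref):
        return sum((x - r) ** 2 for x, r in zip(row, ref)) ** 0.5
    return [dist(r, anti) / (dist(r, anti) + dist(r, ideal)) for r in v]

# Hypothetical scores of four IoT platforms on three benefit criteria
matrix = [[7, 9, 9], [8, 7, 8], [9, 6, 8], [6, 7, 8]]
weights = [0.5, 0.3, 0.2]
print(saw(matrix, weights))
print(topsis(matrix, weights))
```

On this matrix SAW ranks the first alternative highest while TOPSIS prefers the third, which is exactly the kind of disagreement a solution-combining step (as in the proposed hybrid approach) is meant to resolve.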

34 pages, 3253 KiB  
Review
Review of Industry 4.0 from the Perspective of Automation and Supervision Systems: Definitions, Architectures and Recent Trends
by Francisco Javier Folgado, David Calderón, Isaías González and Antonio José Calderón
Electronics 2024, 13(4), 782; https://doi.org/10.3390/electronics13040782 - 16 Feb 2024
Cited by 62 | Viewed by 10404
Abstract
Industry 4.0 is a new paradigm that is transforming the industrial scenario. It has generated a large number of scientific studies, commercial equipment and, above all, high expectations. Nevertheless, there is no single definition or general agreement on its implications, specifically in the field of automation and supervision systems. In this paper, a review of the Industry 4.0 concept is presented, along with equivalent terms, enabling technologies and reference architectures for its implementation. It is shown that this paradigm results from the confluence and integration of both existing and disruptive technologies. Furthermore, the most relevant trends in industrial automation and supervision systems are covered, highlighting the convergence of traditional equipment and equipment characterized by the Internet of Things (IoT). This paper is intended to serve as a reference document and a guide for the design and deployment of automation and supervision systems framed in Industry 4.0. Full article
(This article belongs to the Section Industrial Electronics)

30 pages, 17457 KiB  
Article
Melanoma Skin Cancer Identification with Explainability Utilizing Mask Guided Technique
by Lahiru Gamage, Uditha Isuranga, Dulani Meedeniya, Senuri De Silva and Pratheepan Yogarajah
Electronics 2024, 13(4), 680; https://doi.org/10.3390/electronics13040680 - 6 Feb 2024
Cited by 19 | Viewed by 3497
Abstract
Melanoma is a highly prevalent and lethal form of skin cancer, which has a significant impact globally. The chances of recovery for melanoma patients substantially improve with early detection. Currently, deep learning (DL) methods are gaining popularity in assisting with the identification of diseases using medical imaging. This paper introduces a computational model for classifying melanoma skin cancer images using convolutional neural networks (CNNs) and vision transformers (ViT) with the HAM10000 dataset. Both approaches utilize mask-guided techniques, employing a specialized U2-Net segmentation module to generate masks. The CNN-based approach utilizes ResNet50, VGG16, and Xception with transfer learning. The training process is enhanced using a Bayesian hyperparameter tuner. Moreover, this study applies gradient-weighted class activation mapping (Grad-CAM) and Grad-CAM++ to generate heatmaps that explain the classification models. These visual heatmaps elucidate the contribution of each input region to the classification outcome. The CNN-based approach achieved the highest accuracy of 98.37% with the Xception model, with a sensitivity and specificity of 95.92% and 99.01%, respectively. The ViT-based approach also achieved high accuracy, sensitivity, and specificity of 92.79%, 91.09%, and 93.54%, respectively. Furthermore, the performance of the model was assessed through intersection over union (IoU) and other qualitative evaluations. Finally, we developed the proposed model as a web application that can be used as a real-time support tool for medical practitioners. A system usability study score of 86.87% was obtained, which shows the usefulness of the proposed solution. Full article
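The heatmap step described above follows the standard Grad-CAM recipe, which is compact enough to sketch on raw arrays: channel weights are the spatially averaged gradients of the class score, and the map is the ReLU of the weighted channel sum. This is an illustrative toy with two 2x2 channels and made-up gradients; a real pipeline would pull activations and gradients from the trained CNN:

```python
def grad_cam(activations, gradients):
    """Grad-CAM on one conv layer: channel weights are the spatially averaged
    gradients of the class score; the heatmap is the ReLU of the weighted sum."""
    k = len(activations)
    h, w = len(activations[0]), len(activations[0][0])
    weights = [sum(sum(row) for row in gradients[c]) / (h * w) for c in range(k)]
    cam = [[max(0.0, sum(weights[c] * activations[c][i][j] for c in range(k)))
            for j in range(w)] for i in range(h)]
    return cam

# Tiny hypothetical example: 2 channels of 2x2 feature maps
acts = [[[1.0, 0.0], [0.0, 2.0]], [[0.0, 3.0], [1.0, 0.0]]]
grads = [[[0.5, 0.5], [0.5, 0.5]], [[-0.25, -0.25], [-0.25, -0.25]]]
print(grad_cam(acts, grads))  # -> [[0.5, 0.0], [0.0, 1.0]]
```

Regions with a negative weighted response are clipped to zero, so the heatmap highlights only evidence in favor of the predicted class.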

55 pages, 1876 KiB  
Review
A Survey on Video Streaming for Next-Generation Vehicular Networks
by Chenn-Jung Huang, Hao-Wen Cheng, Yi-Hung Lien and Mei-En Jian
Electronics 2024, 13(3), 649; https://doi.org/10.3390/electronics13030649 - 4 Feb 2024
Cited by 10 | Viewed by 3912
Abstract
As assisted driving technology advances and vehicle entertainment systems rapidly develop, future vehicles will become mobile cinemas, where passengers can use various multimedia applications in the car. In recent years, progress in multimedia technology has given rise to immersive video experiences. In addition to conventional 2D videos, 360° videos are gaining popularity, and volumetric videos, which can offer users an even better immersive experience, have been discussed. However, these applications place high demands on network capabilities, leading to a dependence on next-generation wireless communication technology to address network bottlenecks. Therefore, this study provides an exhaustive overview of the latest advancements in video streaming over vehicular networks. First, we introduce related work and background knowledge, and provide an overview of recent developments in vehicular networking and video types. Next, we detail various video processing technologies, including the latest released standards. Detailed explanations are provided for network strategies and wireless communication technologies that can optimize video transmission in vehicular networks, with special attention to the literature on the current development of 6G technology as applied to vehicular communication. Finally, we propose future research directions and challenges. Building upon the technologies introduced in this paper and considering diverse applications, we suggest a suitable vehicular network architecture for next-generation video transmission. Full article
(This article belongs to the Special Issue Featured Review Papers in Electrical and Autonomous Vehicles)

26 pages, 352 KiB  
Review
Combining Machine Learning and Edge Computing: Opportunities, Challenges, Platforms, Frameworks, and Use Cases
by Piotr Grzesik and Dariusz Mrozek
Electronics 2024, 13(3), 640; https://doi.org/10.3390/electronics13030640 - 3 Feb 2024
Cited by 21 | Viewed by 9071
Abstract
In recent years, we have been observing the rapid growth and adoption of IoT-based systems, enhancing multiple areas of our lives. Concurrently, the utilization of machine learning techniques has surged, often for similar use cases as those seen in IoT systems. In this survey, we aim to focus on the combination of machine learning and the edge computing paradigm. The presented research commences with the topic of edge computing, its benefits, such as reduced data transmission, improved scalability, and reduced latency, as well as the challenges associated with this computing paradigm, like energy consumption, constrained devices, security, and device fleet management. It then presents the motivations behind the combination of machine learning and edge computing, such as the availability of more powerful edge devices, improving data privacy, reducing latency, or lowering reliance on centralized services. Then, it describes several edge computing platforms, with a focus on their capability to enable edge intelligence workflows. It also reviews the currently available edge intelligence frameworks and libraries, such as TensorFlow Lite or PyTorch Mobile. Afterward, the paper focuses on the existing use cases for edge intelligence in areas like industrial applications, healthcare applications, smart cities, environmental monitoring, or autonomous vehicles. Full article
(This article belongs to the Special Issue Towards Efficient and Reliable AI at the Edge)

17 pages, 2087 KiB  
Article
Multi-Channel Graph Convolutional Networks for Graphs with Inconsistent Structures and Features
by Xinglong Chang, Jianrong Wang, Rui Wang, Tao Wang, Yingkui Wang and Weihao Li
Electronics 2024, 13(3), 607; https://doi.org/10.3390/electronics13030607 - 1 Feb 2024
Viewed by 2165
Abstract
Graph convolutional networks (GCNs) have attracted increasing attention in various fields due to their significant capacity to process graph-structured data. Typically, the GCN model and its variants heavily rely on the transmission of node features across the graph structure, which implicitly assumes that the graph structure and node features are consistent, i.e., they carry related information. However, in many real-world networks, node features may unexpectedly mismatch with the structural information. Existing GCNs fail to generalize to inconsistent scenarios and are even outperformed by models that ignore the graph structure or node features. To address this problem, we investigate how to extract representations from both the graph structure and node features. Consequently, we propose the multi-channel graph convolutional network (MCGCN) for graphs with inconsistent structures and features. Specifically, the MCGCN encodes the graph structure and node features using two specific convolution channels to extract two separate specific representations. Additionally, two joint convolution channels are constructed to extract the common information shared by the graph structure and node features. Finally, an attention mechanism is utilized to adaptively learn the importance weights of these channels under the guidance of the node classification task. In this way, our model can handle both consistent and inconsistent scenarios. Extensive experiments on both synthetic and real-world datasets for node classification and recommendation tasks show that our methods, MCGCN-A and MCGCN-I, achieve the best performance on seven out of eight datasets and the second-best performance on the remaining dataset. For simpler graph structures or tasks where the overhead of multiple convolution channels is not justified, traditional single-channel GCN models might be more efficient. Full article
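The channel mechanics can be sketched in miniature: one channel propagates features over the normalized graph, another keeps the raw features, and an attention-style softmax mixes them per node. This is a toy sketch of the general idea, not the paper's MCGCN; the graph, features, and channel scores are made up, and real channels would include learned weight matrices and task-driven attention:

```python
import math

def normalize_adj(adj):
    """Symmetric normalization D^-1/2 (A + I) D^-1/2 used in GCN propagation."""
    n = len(adj)
    a = [[adj[i][j] + (1 if i == j else 0) for j in range(n)] for i in range(n)]
    d = [sum(row) for row in a]
    return [[a[i][j] / math.sqrt(d[i] * d[j]) for j in range(n)] for i in range(n)]

def propagate(a_norm, x):
    """One propagation step: smooth node features over the normalized graph."""
    n, f = len(x), len(x[0])
    return [[sum(a_norm[i][k] * x[k][j] for k in range(n)) for j in range(f)]
            for i in range(n)]

def combine(channels, scores):
    """Attention-style fusion: softmax the channel scores, then mix per node."""
    e = [math.exp(s) for s in scores]
    w = [v / sum(e) for v in e]
    n, f = len(channels[0]), len(channels[0][0])
    return [[sum(w[c] * channels[c][i][j] for c in range(len(channels)))
             for j in range(f)] for i in range(n)]

adj = [[0, 1, 0], [1, 0, 1], [0, 1, 0]]        # a 3-node path graph
x = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]        # toy node features
h_struct = propagate(normalize_adj(adj), x)     # structure-driven channel
h_feat = x                                      # feature-only channel
print(combine([h_struct, h_feat], [0.0, 0.0]))  # equal attention weights
```

When structure and features disagree, learning the channel scores (rather than fixing them at zero as above) lets the model lean on whichever channel is informative for the task.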

15 pages, 4767 KiB  
Article
FDA-Approved Artificial Intelligence and Machine Learning (AI/ML)-Enabled Medical Devices: An Updated Landscape
by Geeta Joshi, Aditi Jain, Shalini Reddy Araveeti, Sabina Adhikari, Harshit Garg and Mukund Bhandari
Electronics 2024, 13(3), 498; https://doi.org/10.3390/electronics13030498 - 24 Jan 2024
Cited by 82 | Viewed by 29224
Abstract
As artificial intelligence (AI) has advanced rapidly over the last decade, machine learning (ML)-enabled medical devices are increasingly used in healthcare. In this study, we collected publicly available information on AI/ML-enabled medical devices approved by the FDA in the United States, as of the latest update on 19 October 2023. We performed a comprehensive analysis of a total of 691 FDA-approved artificial intelligence and machine learning (AI/ML)-enabled medical devices and offer an in-depth analysis of clearance pathways, approval timelines, regulation types, medical specialties, decision types, recall history, etc. We found a significant surge in approvals since 2018, with a clear dominance of the radiology specialty in the application of machine learning tools, attributed to the abundance of data from routine clinical practice. The study also reveals a reliance on the 510(k) clearance pathway, emphasizing its basis in substantial equivalence and its frequent bypassing of the need for new clinical trials. It also notes an underrepresentation of pediatric-focused devices and trials, suggesting an opportunity for expansion in this demographic. Moreover, the geographical limitation of clinical trials, primarily within the United States, points to a need for more globally inclusive trials that encompass diverse patient demographics. This analysis not only maps the current landscape of AI/ML-enabled medical devices but also pinpoints trends, potential gaps, and areas for future exploration in clinical trial practices and regulatory approaches. In conclusion, our analysis sheds light on the current state of FDA-approved AI/ML-enabled medical devices and prevailing trends, contributing to a wider comprehension of this landscape. Full article

21 pages, 7312 KiB  
Article
Cyber-Resilient Converter Control System for Doubly Fed Induction Generator-Based Wind Turbine Generators
by Nathan Farrar and Mohd. Hasan Ali
Electronics 2024, 13(3), 492; https://doi.org/10.3390/electronics13030492 - 24 Jan 2024
Cited by 3 | Viewed by 1957
Abstract
As wind turbine generator systems become more common in the modern power grid, the question of how to adequately protect them from cyber criminals has become a major theme in the development of new control systems. As such, artificial intelligence (AI) and machine learning (ML) algorithms have become major contributors to preventing, detecting, and mitigating cyber attacks in the power system. In their current state, wind turbine generator systems are woefully unprepared for a coordinated and sophisticated cyber attack. With the implementation of internet-of-things (IoT) devices in the power control network, cyber risks have increased exponentially. The literature covers impact analysis and detection techniques for cyber attacks on wind turbine generator systems; however, almost no work has been reported on mitigating the adverse effects of cyber attacks on wind turbine control systems. To overcome these limitations, this paper proposes implementing an AI-based converter controller, i.e., a multi-agent deep deterministic policy gradient (DDPG) method, that can mitigate any adverse effects that communication delays or bad data could have on a grid-connected doubly fed induction generator (DFIG)-based wind turbine generator or wind farm. The performance of the proposed DDPG controller is compared with that of a variable proportional–integral (VPI) control-based mitigation method. The proposed technique has been simulated and validated utilizing MATLAB/Simulink, version R2023A, to demonstrate its effectiveness. The proposed DDPG method outperforms the VPI method in mitigating the adverse impacts of cyber attacks on wind generator systems, as validated by the plots and the root mean square error table in the results section. Full article
(This article belongs to the Special Issue Advances in Renewable Energy and Electricity Generation)
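The multi-agent DDPG internals are beyond the scope of an abstract, but one stabilizing ingredient every DDPG variant shares, the soft (Polyak) target-network update, is easy to sketch. The flat parameter vectors below are illustrative stand-ins for network weights, and tau is exaggerated so the drift is visible:

```python
def soft_update(target, online, tau=0.005):
    """Polyak averaging used by DDPG: target <- target + tau * (online - target).
    Keeping target networks slowly tracking the online networks stabilizes
    the critic's bootstrapped learning targets."""
    return [t + tau * (o - t) for t, o in zip(target, online)]

# Toy flat parameter vectors standing in for actor/critic weights
target = [0.0, 1.0]
online = [1.0, 1.0]
for _ in range(3):
    target = soft_update(target, online, tau=0.5)
print(target)  # target drifts toward the online weights: [0.875, 1.0]
```

With the usual small tau (for example 0.005), the target lags the online network by many updates, trading responsiveness for training stability.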

17 pages, 6140 KiB  
Article
Predictive Maintenance of Machinery with Rotating Parts Using Convolutional Neural Networks
by Stamatis Apeiranthitis, Paraskevi Zacharia, Avraam Chatzopoulos and Michail Papoutsidakis
Electronics 2024, 13(2), 460; https://doi.org/10.3390/electronics13020460 - 22 Jan 2024
Cited by 7 | Viewed by 4153
Abstract
All kinds of vessels consist of dozens of complex machines with rotating parts and electric motors that operate continuously in harsh environments with excess temperature, humidity, vibration, fatigue, and load. A breakdown or malfunction in one of these machines can significantly impact a vessel’s operation and safety and, consequently, the safety of the crew and the environment. To maintain operational efficiency and seaworthiness, the shipping industry invests substantial resources in preventive maintenance and repairs. This study presents the economic and technical benefits of predictive maintenance over traditional preventive maintenance and repair-by-replacement approaches in the maritime domain. By leveraging modern technology and artificial intelligence, we can analyze the operating conditions of machinery by obtaining measurements either from sensors permanently installed on the machinery or from portable measuring instruments. This facilitates the early identification of potential damage, thereby enabling efficient strategizing for future maintenance and repair endeavors. In this paper, we propose and develop a convolutional neural network that is fed with raw vibration measurements acquired in a laboratory environment from the ball bearings of a motor. We then investigate whether the proposed network can accurately detect the functional state of the ball bearings and categorize any failures present, contributing to improved maintenance practices in the shipping industry. Full article
(This article belongs to the Special Issue Intelligent Manufacturing Systems and Applications in Industry 4.0)

20 pages, 10060 KiB  
Article
Comparative Analysis of Machine Learning Models for Predictive Maintenance of Ball Bearing Systems
by Umer Farooq, Moses Ademola and Abdu Shaalan
Electronics 2024, 13(2), 438; https://doi.org/10.3390/electronics13020438 - 21 Jan 2024
Cited by 13 | Viewed by 3816
Abstract
In the era of Industry 4.0 and beyond, ball bearings remain an important part of industrial systems. The failure of ball bearings can lead to plant downtime, inefficient operations, and significant maintenance expenses. Although conventional preventive maintenance mechanisms like time-based maintenance, routine inspections, and manual data analysis provide a certain level of fault prevention, they are often reactive, time-consuming, and imprecise. On the other hand, machine learning algorithms can detect anomalies early, process vast amounts of data, continuously improve in almost real time, and, in turn, significantly enhance the efficiency of modern industrial systems. In this work, we compare different machine learning and deep learning techniques to optimise the predictive maintenance of ball bearing systems, which, in turn, will reduce downtime and improve the efficiency of current and future industrial systems. For this purpose, we evaluate and compare classification algorithms like Logistic Regression and Support Vector Machine, as well as ensemble algorithms like Random Forest and Extreme Gradient Boost. We also explore and evaluate long short-term memory, which is a type of recurrent neural network. We assess and compare these models in terms of their accuracy, precision, recall, F1 scores, and computational requirements. Our comparison results indicate that Extreme Gradient Boost gives the best trade-off between overall performance and computation time. For a dataset of 2155 vibration signals, Extreme Gradient Boost achieves an accuracy of 96.61% while requiring a training time of only 0.76 s. Moreover, among the techniques that give an accuracy greater than 80%, Extreme Gradient Boost also gives the best accuracy-to-computation-time ratio. Full article
(This article belongs to the Section Systems & Control Engineering)
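The evaluation metrics named above all reduce to the four counts of a binary confusion matrix, which can be sketched in a self-contained way (the labels are toy values, where 1 marks a faulty bearing and 0 a healthy one):

```python
def metrics(y_true, y_pred, positive=1):
    """Accuracy, precision, recall and F1 computed from a binary confusion matrix."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == p == positive)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == p != positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive == p)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive != p)
    acc = (tp + tn) / len(y_true)
    prec = tp / (tp + fp) if tp + fp else 0.0
    rec = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
    return acc, prec, rec, f1

# Toy labels: 1 = faulty bearing, 0 = healthy
y_true = [1, 1, 1, 0, 0, 0, 1, 0]
y_pred = [1, 1, 0, 0, 0, 1, 1, 0]
print(metrics(y_true, y_pred))  # -> (0.75, 0.75, 0.75, 0.75)
```

In predictive maintenance, recall (missed faults) and precision (false alarms) usually carry different costs, which is why comparisons report all four metrics rather than accuracy alone.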

14 pages, 8860 KiB  
Article
An Effective Spherical NF/FF Transformation Suitable for Characterising an Antenna under Test in Presence of an Infinite Perfectly Conducting Ground Plane
by Flaminio Ferrara, Claudio Gennarelli, Rocco Guerriero and Giovanni Riccio
Electronics 2024, 13(2), 397; https://doi.org/10.3390/electronics13020397 - 18 Jan 2024
Viewed by 1210
Abstract
An effective near-field to far-field transformation is devised in this paper, using a reduced number of near-field measurements collected via a spherical scan over the upper hemisphere only, owing to the presence of a flat metallic ground. The transformation relies on the non-redundant sampling representations of electromagnetic fields and exploits the image principle to properly account for the metallic ground, supposed to be of infinite extent and realised with perfectly conducting material. The sampling representation of the probe voltage over the upper hemisphere is developed by modelling the antenna under test and its image with a very adaptable convex surface, which is able to fit the geometry of any kind of antenna as closely as possible, thus minimising the volumetric redundancy and, accordingly, the number of required samples as well as the measurement time. The use of a two-dimensional optimal sampling interpolation algorithm then allows the reconstruction of the voltage value at each sampling point of the spherical grid required by the classical near-field-to-far-field transformation developed by Hansen. Numerical examples proving the effectiveness of the developed sampling representation and the related near-field-to-far-field transformation techniques are reported. Full article
(This article belongs to the Special Issue Feature Papers in Microwave and Wireless Communications Section)

18 pages, 2413 KiB  
Article
A Federated Learning-Based Resource Allocation Scheme for Relaying-Assisted Communications in Multicellular Next Generation Network Topologies
by Ioannis A. Bartsiokas, Panagiotis K. Gkonis, Dimitra I. Kaklamani and Iakovos S. Venieris
Electronics 2024, 13(2), 390; https://doi.org/10.3390/electronics13020390 - 17 Jan 2024
Cited by 5 | Viewed by 1802
Abstract
Growing and diverse user needs, along with the need for continuous access with minimal delay in densely populated machine-type networks, have led to a significant overhaul of modern mobile communication systems. Within this realm, the integration of advanced physical layer techniques such as relaying-assisted transmission in beyond fifth-generation (B5G) networks aims not only to enhance network performance but also to extend coverage across multicellular orientations. However, in cellular environments, the increased interference levels and the complex channel representations introduce a notable rise in the computational complexity associated with radio resource management (RRM) tasks. Machine and deep learning (ML/DL) have been proposed as an efficient way to support the enhanced user demands in densely populated environments, since ML/DL models can relax the traffic load associated with RRM tasks. These solutions, however, require the distributed execution of training tasks to accelerate decision-making in RRM. For this purpose, federated learning (FL) schemes are considered a promising field of research for next-generation (NG) networks' RRM. This paper proposes an FL approach to tackle the joint relay node (RN) selection and resource allocation problem in B5G networks, subject to power management constraints. The optimization objective of this approach is to jointly elevate energy efficiency (EE) and spectral efficiency (SE) levels. The performance of the proposed approach is evaluated for various relaying-assisted transmission topologies and through comparison with other state-of-the-art approaches (both ML and non-ML). In particular, the total system EE and SE can be improved by up to approximately 10–20% compared to a state-of-the-art centralized ML scheme. Moreover, the achieved accuracy can be improved by up to 10% compared to state-of-the-art non-ML solutions, while training time is reduced by approximately 50%. Full article
(This article belongs to the Special Issue Feature Papers in Microwave and Wireless Communications Section)
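The federated learning idea at the core of such schemes can be sketched with the standard FedAvg aggregation step (a minimal illustration, not the paper's algorithm; flattened parameter vectors are assumed): each client trains locally, and the server averages the resulting models weighted by local dataset size, so raw channel or user data never leaves the client.

```python
def fedavg(client_weights, client_sizes):
    """FedAvg server step: combine per-client parameter vectors into a
    single global model, weighting each client by its dataset size."""
    total = sum(client_sizes)
    n_params = len(client_weights[0])
    return [
        sum(w[i] * s for w, s in zip(client_weights, client_sizes)) / total
        for i in range(n_params)
    ]

# Two clients with flattened parameter vectors of length 2; the second
# client holds three times as much data, so it dominates the average:
print(fedavg([[1.0, 0.0], [3.0, 2.0]], [10, 30]))  # [2.5, 1.5]
```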
15 pages, 3885 KiB  
Article
A Study on Machine Learning-Enhanced Roadside Unit-Based Detection of Abnormal Driving in Autonomous Vehicles
by Keon Yun, Heesun Yun, Sangmin Lee, Jinhyeok Oh, Minchul Kim, Myongcheol Lim, Juntaek Lee, Chanmin Kim, Jiwon Seo and Jinyoung Choi
Electronics 2024, 13(2), 288; https://doi.org/10.3390/electronics13020288 - 8 Jan 2024
Cited by 8 | Viewed by 2733
Abstract
Ensuring the safety of autonomous vehicles is becoming increasingly important with ongoing technological advancements. In this paper, we suggest a machine learning-based approach for detecting and responding to various abnormal behaviors within the V2X system, a system that mirrors real-world road conditions. Our system, including the RSU, is designed to identify vehicles exhibiting abnormal driving. Abnormal driving can arise from various causes, such as communication delays, sensor errors, navigation system malfunctions, environmental challenges, and cybersecurity threats. We simulated three primary scenarios of abnormal driving: sensor errors, overlapping vehicles, and counterflow driving. The applicability of machine learning algorithms for detecting these anomalies was evaluated. The MiniSom algorithm, in particular, demonstrated high accuracy, recall, and precision in identifying sensor errors, vehicle overlaps, and counterflow situations. Notably, changes in the vehicle’s direction and its characteristics proved to be significant indicators in the Basic Safety Messages (BSM). We propose adding a new element called linePosition to BSM Part 2, enhancing our ability to promptly detect and address vehicle abnormalities. This addition underpins the technical capabilities of RSU systems equipped with edge computing, enabling real-time analysis of vehicle data and appropriate responsive measures. In this paper, we emphasize the effectiveness of machine learning in identifying and responding to the abnormal behavior of autonomous vehicles, offering new ways to enhance vehicle safety and facilitate smoother road traffic flow. Full article
(This article belongs to the Section Electrical and Autonomous Vehicles)
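Self-organising-map anomaly detection of the kind performed here can be sketched as thresholding the quantization error, i.e. the distance from a BSM feature vector to its best-matching unit in a codebook trained on normal driving (a minimal illustration, not the authors' pipeline; the features, codebook values, and threshold below are hypothetical):

```python
import math

def bmu_distance(x, codebook):
    """Quantization error: distance from a feature vector to its
    best-matching unit (BMU) in a trained SOM codebook."""
    return min(math.dist(x, w) for w in codebook)

def is_abnormal(x, codebook, threshold):
    """Flag a BSM feature vector as abnormal driving when its
    quantization error exceeds a threshold fitted on normal traffic."""
    return bmu_distance(x, codebook) > threshold

# Hypothetical 2-D features (speed in km/h, heading change in rad)
# learned from normal driving:
normal_codebook = [(30.0, 0.0), (50.0, 0.1), (80.0, 0.0)]
print(is_abnormal((52.0, 0.1), normal_codebook, threshold=5.0))  # False
print(is_abnormal((5.0, 3.0), normal_codebook, threshold=5.0))   # True
```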
18 pages, 2334 KiB  
Article
How to Design and Evaluate mHealth Apps? A Case Study of a Mobile Personal Health Record App
by Guyeop Kim, Dongwook Hwang, Jaehyun Park, Hyun K. Kim and Eui-Seok Hwang
Electronics 2024, 13(1), 213; https://doi.org/10.3390/electronics13010213 - 3 Jan 2024
Cited by 6 | Viewed by 3992
Abstract
The rapid growth of the mHealth market has led to the development of several tools to evaluate user experience. However, there is a lack of universal tools specifically designed for this emerging technology. This study was conducted with the aim of developing and verifying a user experience evaluation scale for mHealth apps based on factors proposed in previous research. The initial draft of the tool was created following a comprehensive review of existing questionnaires related to mHealth app evaluation. The validity of this scale was then tested through exploratory and confirmatory factor analysis. The results of the factor analysis led to the derivation of 16 items, which were conceptually mapped to five factors: ease of use and satisfaction, information architecture, usefulness, ease of information, and aesthetics. A case study was also conducted in which this scale was used to improve an mHealth app for personal health records. In conclusion, the developed user experience evaluation scale for mHealth apps can provide comprehensive user feedback and contribute to the improvement of these apps. Full article
(This article belongs to the Special Issue Human-Computer Interactions in E-health)
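The internal consistency checked in this kind of scale validation is commonly quantified with Cronbach's alpha; a minimal sketch (not the authors' analysis code; the example responses are hypothetical):

```python
def variance(xs):
    """Population variance (the n vs n-1 choice cancels in the ratio)."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def cronbach_alpha(item_scores):
    """Cronbach's alpha for a questionnaire scale, where
    item_scores[i][j] is respondent i's answer to item j:
    alpha = k/(k-1) * (1 - sum of item variances / variance of totals)."""
    k = len(item_scores[0])
    item_vars = [variance([row[j] for row in item_scores]) for j in range(k)]
    total_var = variance([sum(row) for row in item_scores])
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

# Three respondents answering two perfectly consistent items gives the
# maximum reliability of 1:
print(cronbach_alpha([[1, 1], [2, 2], [3, 3]]))  # 1.0
```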
27 pages, 5536 KiB  
Article
Multi-Modal Contrastive Learning for LiDAR Point Cloud Rail-Obstacle Detection in Complex Weather
by Lu Wen, Yongliang Peng, Miao Lin, Nan Gan and Rongqing Tan
Electronics 2024, 13(1), 220; https://doi.org/10.3390/electronics13010220 - 3 Jan 2024
Cited by 12 | Viewed by 3131
Abstract
Obstacle intrusion is a serious threat to the safety of railway traffic. LiDAR point cloud 3D semantic segmentation (3DSS) provides a new method for unmanned rail-obstacle detection. However, the inevitable degradation of model performance occurs in complex weather and hinders its practical application. In this paper, a multi-modal contrastive learning (CL) strategy, named DHT-CL, is proposed to improve point cloud 3DSS in complex weather for rail-obstacle detection. DHT-CL is a camera and LiDAR sensor fusion strategy specifically designed for complex weather and obstacle detection tasks, without the need for image input during the inference stage. We first demonstrate that the sensor fusion method is more robust under rainy and snowy conditions, and then we design a Dual-Helix Transformer (DHT) to extract deeper cross-modal information through a neighborhood attention mechanism. Then, an obstacle anomaly-aware cross-modal discrimination loss is constructed for collaborative optimization that adapts to the anomaly identification task. Experimental results on a complex weather railway dataset show that, with an mIoU of 87.38%, the proposed DHT-CL strategy achieves better performance than other high-performance models developed for the autonomous driving dataset SemanticKITTI. The qualitative results show that DHT-CL achieves higher accuracy in clear weather and reduces false alarms in rainy and snowy weather. Full article
(This article belongs to the Special Issue Advanced Technologies in Intelligent Transportation Systems)
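The contrastive objective underlying multi-modal strategies such as this one is typically a variant of the InfoNCE loss, which pulls a cross-modal positive pair together while pushing negatives apart; a minimal single-anchor sketch (illustrative only, not the paper's discrimination loss):

```python
import math

def info_nce(sim_pos, sim_negs, temperature=0.1):
    """InfoNCE loss for one anchor: the negative log-softmax of the
    positive's similarity against the positive plus all negatives."""
    logits = [sim_pos / temperature] + [s / temperature for s in sim_negs]
    m = max(logits)  # stabilise the log-sum-exp
    log_denom = m + math.log(sum(math.exp(v - m) for v in logits))
    return log_denom - logits[0]

# When the positive is no more similar than the negatives, the loss sits
# at log(N + 1); separating the positive drives it toward zero:
print(round(info_nce(0.5, [0.5, 0.5], temperature=1.0), 4))  # 1.0986
print(info_nce(5.0, [0.0, 0.0], temperature=1.0) < 0.05)     # True
```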