Editor’s Choice Articles

Editor’s Choice articles are based on recommendations by the scientific editors of MDPI journals from around the world. Editors select a small number of articles recently published in the journal that they believe will be particularly interesting to readers, or important in the respective research area. The aim is to provide a snapshot of some of the most exciting work published in the various research areas of the journal.

17 pages, 1888 KiB  
Article
Bibliometric Analysis of Automated Assessment in Programming Education: A Deeper Insight into Feedback
by José Carlos Paiva, Álvaro Figueira and José Paulo Leal
Electronics 2023, 12(10), 2254; https://doi.org/10.3390/electronics12102254 - 15 May 2023
Cited by 4 | Viewed by 3020
Abstract
Learning to program requires diligent practice and creates room for discovery, trial and error, debugging, and concept mapping. Learners must walk this long road themselves, supported by appropriate and timely feedback. Providing such feedback in programming exercises is not a humanly feasible task. Therefore, the early and steadily growing interest of computer science educators in the automated assessment of programming exercises is not surprising. The automated assessment of programming assignments has been an active area of research for over half a century, and interest in it continues to grow as it adapts to new developments in computer science and the resulting changes in educational requirements. It is therefore of paramount importance to understand the work that has been performed, who has performed it, its evolution over time, the relationships between publications, its hot topics, and open problems, among others. This paper presents a bibliometric study of the field, with a particular focus on the issue of automatic feedback generation, using literature data from the Web of Science Core Collection. It includes a descriptive analysis using various bibliometric measures and data visualizations on authors, affiliations, citations, and topics. In addition, we performed a complementary analysis focusing only on the subset of publications on the specific topic of automatic feedback generation. The results are highlighted and discussed. Full article
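The descriptive analysis described in the abstract rests on simple counting of bibliometric measures. A minimal sketch with hypothetical records (a real analysis would parse a Web of Science Core Collection export):

```python
from collections import Counter
from itertools import combinations

# Hypothetical, minimal publication records for illustration only.
records = [
    {"authors": ["A", "B"], "citations": 10},
    {"authors": ["B", "C"], "citations": 5},
]

# Publications and citations per author (two basic bibliometric measures).
pubs = Counter(a for r in records for a in r["authors"])
cites = Counter()
for r in records:
    for a in r["authors"]:
        cites[a] += r["citations"]

# Co-authorship edges, the raw material for a collaboration network.
edges = Counter()
for r in records:
    for pair in combinations(sorted(r["authors"]), 2):
        edges[pair] += 1
```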

14 pages, 595 KiB  
Article
FASS: Face Anti-Spoofing System Using Image Quality Features and Deep Learning
by Enoch Solomon and Krzysztof J. Cios
Electronics 2023, 12(10), 2199; https://doi.org/10.3390/electronics12102199 - 12 May 2023
Cited by 23 | Viewed by 6398
Abstract
Face recognition technology has been widely used due to the convenience it provides. However, face recognition is vulnerable to spoofing attacks, which limits its usage in sensitive application areas. This work introduces a novel face anti-spoofing system, FASS, that fuses the results of two classifiers. One, a random forest, uses seven no-reference image quality features that we identified, derived from face images; its results are fused with those of a deep learning classifier that uses entire face images as input. Extensive experiments were performed to compare FASS with state-of-the-art anti-spoofing systems on five benchmark datasets: Replay-Attack, CASIA-MFSD, MSU-MFSD, OULU-NPU and SiW. The results show that FASS outperforms all face anti-spoofing systems based on image quality features and is also more accurate than many of the state-of-the-art systems based on deep learning. Full article
(This article belongs to the Special Issue Modern Computer Vision and Image Analysis)
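The fusion of the two classifiers' outputs can be sketched as a weighted average of their spoof probabilities. The weight and decision threshold below are illustrative assumptions, not the values used in FASS:

```python
# Late fusion of two classifiers' spoof probabilities. The weight w and the
# threshold are illustrative assumptions, not FASS's actual values.
def fuse_scores(p_quality: float, p_deep: float, w: float = 0.5) -> float:
    return w * p_quality + (1.0 - w) * p_deep

def is_spoof(p_quality: float, p_deep: float, threshold: float = 0.5) -> bool:
    return fuse_scores(p_quality, p_deep) >= threshold
```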

20 pages, 1319 KiB  
Article
Deep-Learning-Driven Techniques for Real-Time Multimodal Health and Physical Data Synthesis
by Muhammad Salman Haleem, Audrey Ekuban, Alessio Antonini, Silvio Pagliara, Leandro Pecchia and Carlo Allocca
Electronics 2023, 12(9), 1989; https://doi.org/10.3390/electronics12091989 - 25 Apr 2023
Cited by 10 | Viewed by 3831
Abstract
With the advent of Artificial Intelligence for healthcare, data synthesis methods present crucial benefits in facilitating the fast development of AI models while protecting data subjects and bypassing the need to engage with the complexity of data sharing and processing agreements. Existing technologies focus on synthesising real-time physiological and physical records based on regular time intervals. Real health data are, however, characterised by irregularities and multimodal variables that are still hard to reproduce while preserving the correlation across time and different dimensions. This paper presents two novel techniques for synthetic data generation of real-time multimodal electronic health and physical records, (a) the Temporally Correlated Multimodal Generative Adversarial Network and (b) the Document Sequence Generator. The paper illustrates the need for and use of these techniques through a real use case, the H2020 GATEKEEPER project of AI for healthcare. Furthermore, the paper presents the evaluation of both techniques individually, a discussion of their comparability, and the potential applications of synthetic data at the different stages of the software development life-cycle. Full article

20 pages, 10058 KiB  
Article
Design of Vessel Data Lakehouse with Big Data and AI Analysis Technology for Vessel Monitoring System
by Sun Park, Chan-Su Yang and JongWon Kim
Electronics 2023, 12(8), 1943; https://doi.org/10.3390/electronics12081943 - 20 Apr 2023
Cited by 12 | Viewed by 4447
Abstract
The amount of data in the maritime domain is rapidly increasing due to the increase in devices that can collect marine information, such as sensors, buoys, ships, and satellites. Maritime data is growing at an unprecedented rate, with terabytes of marine data being collected every month and petabytes of data already being made public. Heterogeneous marine data collected through various devices can be used in various fields, such as environmental protection, defect prediction, transportation route optimization, and energy efficiency. However, it is difficult to manage vessel-related data due to the high heterogeneity of such marine big data. Additionally, due to this heterogeneity and some of the challenges associated with big data, such applications are still underdeveloped and fragmented. In this paper, we propose the Vessel Data Lakehouse architecture, consisting of the Vessel Data Lake layer that can handle marine big data, the Vessel Data Warehouse layer that supports marine big data processing and AI, and the Vessel Application Services layer that supports marine application services. Our proposed Vessel Data Lakehouse can efficiently manage heterogeneous vessel-related data. It can be integrated and managed at low cost by structuring various types of heterogeneous data using an open-source-based big data framework. In addition, various types of vessel big data stored in the Data Lakehouse can be directly utilized in various types of vessel analysis services. In this paper, we present an actual use case of a vessel analysis service in a Vessel Data Lakehouse using AIS data from the Busan area. Full article

21 pages, 7652 KiB  
Article
An Image Object Detection Model Based on Mixed Attention Mechanism Optimized YOLOv5
by Guangming Sun, Shuo Wang and Jiangjian Xie
Electronics 2023, 12(7), 1515; https://doi.org/10.3390/electronics12071515 - 23 Mar 2023
Cited by 14 | Viewed by 3594
Abstract
As one of the more difficult problems in the field of computer vision, utilizing object image detection technology in a complex environment involves other key technologies, such as pattern recognition, artificial intelligence, and digital image processing. However, because an environment can be complex, changeable, highly variable, and easily confused with the target, the target is easily affected by other factors, such as insufficient light, partial occlusion, background interference, etc., making the detection of multiple targets extremely difficult and the robustness of the algorithm low. How to make full use of the rich spatial information and deep texture information in an image to accurately identify the target type and location is an urgent problem to be solved. The emergence of deep neural networks provides an effective way for image feature extraction and full utilization. Aiming at the above problems, this paper proposes an object detection model based on the mixed attention mechanism optimization of YOLOv5 (MAO-YOLOv5). The proposed method fuses the local features and global features in an image so as to better enrich the expression ability of the feature map and more effectively detect objects with large differences in size within the image. Then, the attention mechanism is added to the feature map to weigh each channel, enhance the key features, remove the redundant features, and improve the recognition ability of the feature network towards the target object and background. The results show that the proposed network model has higher precision and a faster running speed and can perform better in object-detection tasks. Full article
(This article belongs to the Special Issue Advanced Technologies of Artificial Intelligence in Signal Processing)
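Weighing each channel of a feature map, as the abstract describes, can be illustrated with a squeeze-and-excitation-style block. This is a generic sketch of channel attention, not necessarily the exact mechanism used in MAO-YOLOv5:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(feat, w1, w2):
    # feat: (C, H, W) feature map; w1: (C//r, C), w2: (C, C//r) learned weights
    squeeze = feat.mean(axis=(1, 2))                       # global average pool -> (C,)
    excite = sigmoid(w2 @ np.maximum(w1 @ squeeze, 0.0))   # per-channel weight in (0, 1)
    return feat * excite[:, None, None]                    # reweight each channel

rng = np.random.default_rng(0)
feat = rng.normal(size=(8, 4, 4))
w1 = rng.normal(size=(2, 8))
w2 = rng.normal(size=(8, 2))
out = channel_attention(feat, w1, w2)
```

Because every channel weight lies in (0, 1), the block can only attenuate channels, emphasising informative ones relative to redundant ones.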

30 pages, 12366 KiB  
Article
A Freehand 3D Ultrasound Reconstruction Method Based on Deep Learning
by Xin Chen, Houjin Chen, Yahui Peng, Liu Liu and Chang Huang
Electronics 2023, 12(7), 1527; https://doi.org/10.3390/electronics12071527 - 23 Mar 2023
Cited by 9 | Viewed by 6484
Abstract
In the medical field, 3D ultrasound reconstruction can visualize the internal structure of patients, which is very important for doctors to carry out correct analyses and diagnoses. Furthermore, medical 3D ultrasound images have been widely used in clinical disease diagnosis because they can more intuitively display the characteristics and spatial location information of the target. The traditional way to obtain 3D ultrasonic images is to use a 3D ultrasonic probe directly. Although freehand 3D ultrasound reconstruction is still in the research stage, a lot of research has recently been conducted on freehand ultrasound reconstruction methods based on wireless ultrasonic probes. In this paper, a wireless linear array probe is used to build a freehand acousto-optic positioning 3D ultrasonic imaging system. A B-scan (brightness scan) produces a 2D cross-section of the target, such as the eye and its orbit. This system is used to collect and construct multiple 2D B-scan datasets for experiments. Based on the experimental results, a freehand 3D ultrasonic reconstruction method based on deep learning is proposed, called sequence prediction reconstruction based on acoustic optical localization (SPRAO). SPRAO is an ultrasound reconstruction system that is not yet ready for clinical use. Compared with 3D reconstruction using a 3D ultrasound probe, SPRAO not only has a controllable scanning area but also a low cost. SPRAO solves some of the problems in the existing algorithms. Firstly, a 60 frames per second (FPS) B-scan sequence can be synthesized using a 12 FPS wireless ultrasonic probe through 2–3 acquisitions. This not only effectively reduces the requirement for the output frame rate of the ultrasonic probe but also increases the moving speed of the wireless probe. Secondly, SPRAO analyzes the B-scans through speckle decorrelation to calibrate the acousto-optic auxiliary positioning information, while other algorithms have no solution to the cumulative error of the external auxiliary positioning device. Finally, long short-term memory (LSTM) is used to predict the spatial position and attitude of the B-scans, and the calculation of pose deviation and speckle decorrelation is integrated into a 3D convolutional neural network (3DCNN), preparing for real-time 3D reconstruction on the premise of accurate spatial poses of the B-scans. At the end of this paper, SPRAO is compared with linear motion, IMU, speckle decorrelation, CNN and other methods. From the experimental results, it can be observed that the spatial pose deviation of B-scans output using SPRAO is the best of these methods. Full article
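The speckle decorrelation idea can be illustrated by the normalized cross-correlation between two frames, which falls as the out-of-plane distance between them grows. The synthetic frames below stand in for real B-scans; this is a simplified illustration, not SPRAO's calibration procedure:

```python
import numpy as np

def correlation(a, b):
    # Normalized cross-correlation between two image patches.
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum()))

rng = np.random.default_rng(1)
frame = rng.normal(size=(64, 64))
near = frame + 0.1 * rng.normal(size=(64, 64))  # small inter-frame displacement
far = frame + 1.0 * rng.normal(size=(64, 64))   # larger displacement
# Decorrelation (1 - correlation) is larger for the more distant frame,
# which is what lets inter-frame distance be estimated from speckle.
```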

15 pages, 685 KiB  
Article
Knowledge-Guided Prompt Learning for Few-Shot Text Classification
by Liangguo Wang, Ruoyu Chen and Li Li
Electronics 2023, 12(6), 1486; https://doi.org/10.3390/electronics12061486 - 21 Mar 2023
Cited by 8 | Viewed by 5724
Abstract
Recently, prompt-based learning has shown impressive performance on various natural language processing tasks in few-shot scenarios. The previous study of knowledge probing showed that the success of prompt learning contributes to the implicit knowledge stored in pre-trained language models. However, how this implicit knowledge helps solve downstream tasks remains unclear. In this work, we propose a knowledge-guided prompt learning method that can reveal relevant knowledge for text classification. Specifically, a knowledge prompting template and two multi-task frameworks were designed, respectively. The experiments demonstrated the superiority of combining knowledge and prompt learning in few-shot text classification. Full article
(This article belongs to the Special Issue Natural Language Processing and Information Retrieval)
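A knowledge prompting template of the kind the abstract mentions can be sketched as a string template that prepends retrieved knowledge before a masked-LM-style question. The wording below is illustrative, not the paper's exact template:

```python
# Illustrative knowledge prompting template for few-shot text classification;
# a pre-trained masked language model would fill the [MASK] slot.
def build_prompt(text: str, knowledge: str) -> str:
    return f"Knowledge: {knowledge} Text: {text} Topic: [MASK]."

prompt = build_prompt(
    "The striker scored twice in the final.",
    "A striker is a position in football.",
)
```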

19 pages, 2296 KiB  
Article
Integration of Farm Financial Accounting and Farm Management Information Systems for Better Sustainability Reporting
by Krijn Poppe, Hans Vrolijk and Ivor Bosloper
Electronics 2023, 12(6), 1485; https://doi.org/10.3390/electronics12061485 - 21 Mar 2023
Cited by 16 | Viewed by 8961
Abstract
Farmers face an increasing administrative burden as agricultural policies and certification systems of trade partners ask for more sustainability reporting. Several indicator frameworks have been developed to measure sustainability, but they often lack empirical operationalization and are not always measured at the farm level. The research gap we address in this paper is the empirical link between the data needs for sustainability reporting and the developments in data management at the farm level. Family farms do not collect much data for internal management, but external demand for sustainability data can partly be fulfilled by reorganizing data management in the farm office. The Farm Financial Accounts (FFAs) and Farm Management Information Systems (FMISs) are the main data sources in the farm office. They originate from the same source of note-taking by farmers but became separated when formalized and computerized. Nearly all European farms have a bank account and must keep financial accounts (e.g., for Value-Added Tax or income tax) that can be audited. Financial accounts are not designed for environmental accounting or calculating sustainability metrics but provide a wealth of information to make assessments on these subjects. FMISs are much less frequently used but collect more technical and fine-grained data at crop or enterprise level for different fields. FMISs are also strong in integrating sensor and satellite data. Integrating data availability and workflows of FFAs and FMISs makes sustainability reporting less cumbersome regarding data entry and adds valuable data to environmental accounts. This paper applies a design science approach to design an artifact, a dashboard for sustainability reporting based on the integration of information flows from farm financial accounting systems and farm management information systems. 
The design developed in this paper illustrates that if invoices were digitized, most of the data gathering needed for external sustainability reporting would be done automatically when the invoice is paid by bank transfer. Data on the use of inputs and production could be added with procedures like those in current FMISs, but with less data entry, fewer risks of differences in outcomes, and possibilities of cross-checking the results. Full article

13 pages, 2089 KiB  
Article
Serious Games and Soft Skills in Higher Education: A Case Study of the Design of Compete!
by Nadia McGowan, Aída López-Serrano and Daniel Burgos
Electronics 2023, 12(6), 1432; https://doi.org/10.3390/electronics12061432 - 17 Mar 2023
Cited by 16 | Viewed by 5854
Abstract
This article describes the serious game Compete!, developed within the European Erasmus+ framework, that aims to teach soft skills to higher education students in order to increase their employability. Despite the increasing relevance of soft skills for successful entry into the labour market, these are often overlooked in higher education. A participatory learning methodology based on a gamification tool has been used for this purpose. The game presents a series of scenarios describing social sustainability problems that require the application of soft skills identified as key competencies in a field study across different European countries. These competencies are creative problem-solving, effective communication, stress management, and teamwork. On completion of each game scenario and the game itself, students receive an evaluation of both their soft skills and the strategic and operational decisions they have made. In the evaluation of these decisions, both the economic and sustainability aspects of the decision are assessed. The teacher can then address the competencies and sustainability issues using the different game scenarios, thus creating higher motivation and deeper understanding amongst the students. This hybrid learning methodology incorporates digital tools for the cross-curricular teaching and learning of sustainability and soft skills. In conclusion, this article describes a possible method of incorporating soft skills in higher education; this complements students’ technical knowledge while helping to achieve Sustainable Development Goals. Full article

14 pages, 25964 KiB  
Article
Deep-Learning-Based Scalp Image Analysis Using Limited Data
by Minjeong Kim, Yujung Gil, Yuyeon Kim and Jihie Kim
Electronics 2023, 12(6), 1380; https://doi.org/10.3390/electronics12061380 - 14 Mar 2023
Cited by 13 | Viewed by 8136
Abstract
The World Health Organization and Korea National Health Insurance assert that the number of alopecia patients is increasing every year, and approximately 70 percent of adults suffer from scalp problems. Although alopecia is a genetic problem, it is difficult to diagnose at an early stage. Although deep-learning-based approaches have been effective for medical image analyses, it is challenging to generate deep learning models for alopecia detection and analysis because creating an alopecia image dataset is challenging. In this paper, we present an approach for generating a model specialized for alopecia analysis that achieves high accuracy by applying data preprocessing, data augmentation, and an ensemble of deep learning models that have been effective for medical image analyses. We use an alopecia image dataset containing 526 good, 13,156 mild, 3742 moderate, and 825 severe alopecia images. The dataset was further augmented by applying normalization, geometry-based augmentation (rotate, vertical flip, horizontal flip, crop, and affine transformation), and PCA augmentation. We compare the performance of single deep learning models using ResNet, ResNeXt, DenseNet, and XceptionNet with ensembles of these models. The best result was achieved when DenseNet, XceptionNet, and ResNet were combined, achieving an accuracy of 95.75% and an F1 score of 87.05%. Full article
(This article belongs to the Section Artificial Intelligence)
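Combining several backbones as described above is commonly done by soft voting over their class probabilities. A minimal sketch; the probability vectors are made up, and the paper's exact ensembling scheme may differ:

```python
import numpy as np

def ensemble_predict(prob_list):
    # Average the class-probability vectors and pick the argmax class.
    mean = np.mean(prob_list, axis=0)
    return int(np.argmax(mean)), mean

# Hypothetical per-model probabilities over 4 severity classes.
densenet = np.array([0.7, 0.2, 0.05, 0.05])
xception = np.array([0.4, 0.4, 0.1, 0.1])
resnet = np.array([0.6, 0.1, 0.2, 0.1])
label, mean = ensemble_predict([densenet, xception, resnet])
```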

19 pages, 8478 KiB  
Article
FE-GAN: Fast and Efficient Underwater Image Enhancement Model Based on Conditional GAN
by Jie Han, Jian Zhou, Lin Wang, Yu Wang and Zhongjun Ding
Electronics 2023, 12(5), 1227; https://doi.org/10.3390/electronics12051227 - 4 Mar 2023
Cited by 15 | Viewed by 3963
Abstract
The processing of underwater images can vastly ease the difficulty of underwater robots' tasks and promote the development of ocean exploration. This paper proposes a fast and efficient underwater image enhancement model based on a conditional GAN with good generalization ability, using aggregation strategies and concatenation operations to take full advantage of the limited hierarchical features. A sequential network can avoid frequently visiting additional nodes, which is beneficial for speeding up inference and reducing memory consumption. Through the structural re-parameterization approach, we design a dual residual block (DRB) and accordingly construct a hierarchical attention encoder (HAE), which can extract sufficient feature and texture information from different levels of an image, with an 11.52% improvement in GFLOPs. Extensive experiments were carried out on real and artificially synthesized benchmark underwater image datasets, and qualitative and quantitative comparisons with state-of-the-art methods were implemented. The results show that our model produces better images and has good generalization ability and real-time performance, which is more conducive to the practical application of underwater robot tasks. Full article
(This article belongs to the Section Artificial Intelligence)
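Structural re-parameterization, the idea behind the DRB, folds the parallel branches used at training time into a single convolution for fast sequential inference. A single-channel sketch (the paper's blocks operate on multi-channel tensors):

```python
import numpy as np

def conv2d(x, k):
    # 'Same'-padded 2D correlation, looped for clarity.
    p = k.shape[0] // 2
    xp = np.pad(x, p)
    out = np.zeros_like(x, dtype=float)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i, j] = (xp[i:i + k.shape[0], j:j + k.shape[1]] * k).sum()
    return out

k3 = np.arange(9.0).reshape(3, 3)   # 3x3 branch
k1 = np.array([[2.0]])              # 1x1 branch

# Fold the 1x1 branch and the identity branch into one 3x3 kernel:
# both act only on the centre tap.
merged = k3.copy()
merged[1, 1] += k1[0, 0] + 1.0

x = np.random.default_rng(2).normal(size=(5, 5))
branched = conv2d(x, k3) + k1[0, 0] * x + x   # training-time parallel branches
folded = conv2d(x, merged)                    # inference-time single conv
```

By linearity of convolution, the folded kernel reproduces the branched output exactly, so inference pays for only one convolution.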

15 pages, 557 KiB  
Article
WCC-JC 2.0: A Web-Crawled and Manually Aligned Parallel Corpus for Japanese-Chinese Neural Machine Translation
by Jinyi Zhang, Ye Tian, Jiannan Mao, Mei Han, Feng Wen, Cong Guo, Zhonghui Gao and Tadahiro Matsumoto
Electronics 2023, 12(5), 1140; https://doi.org/10.3390/electronics12051140 - 26 Feb 2023
Cited by 8 | Viewed by 3518
Abstract
Movie and TV subtitles are frequently employed in natural language processing (NLP) applications, but there are limited Japanese-Chinese bilingual corpora accessible as a dataset to train neural machine translation (NMT) models. In our previous study, we effectively constructed a corpus of a considerable size containing bilingual text data in both Japanese and Chinese by collecting subtitle text data from websites that host movies and television series. The unsatisfactory translation performance of the initial corpus, Web-Crawled Corpus of Japanese and Chinese (WCC-JC 1.0), was predominantly caused by the limited number of sentence pairs. To address this shortcoming, we thoroughly analyzed the issues associated with the construction of WCC-JC 1.0 and constructed the WCC-JC 2.0 corpus by first collecting subtitle data from movie and TV series websites. Then, we manually aligned a large number of high-quality sentence pairs. Our efforts resulted in a new corpus that includes about 1.4 million sentence pairs, an 87% increase compared with WCC-JC 1.0. As a result, WCC-JC 2.0 is now among the largest publicly available Japanese-Chinese bilingual corpora in the world. To assess the performance of WCC-JC 2.0, we calculated the BLEU scores relative to other comparative corpora and performed manual evaluations of the translation results generated by translation models trained on WCC-JC 2.0. We provide WCC-JC 2.0 as a free download for research purposes only. Full article
(This article belongs to the Special Issue Natural Language Processing and Information Retrieval)
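BLEU, used above to compare WCC-JC 2.0 against other corpora, can be sketched as modified n-gram precision with a brevity penalty. A simplified corpus-level implementation with one reference per hypothesis; production evaluation would use a standard tool such as sacreBLEU:

```python
import math
from collections import Counter

def ngrams(tokens, n):
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def bleu(hyps, refs, max_n=4):
    match = [0] * max_n
    total = [0] * max_n
    hyp_len = ref_len = 0
    for hyp, ref in zip(hyps, refs):
        hyp_len += len(hyp)
        ref_len += len(ref)
        for n in range(1, max_n + 1):
            h, r = ngrams(hyp, n), ngrams(ref, n)
            match[n - 1] += sum(min(c, r[g]) for g, c in h.items())  # clipped counts
            total[n - 1] += max(len(hyp) - n + 1, 0)
    if min(match) == 0:
        return 0.0
    log_prec = sum(math.log(m / t) for m, t in zip(match, total)) / max_n
    bp = min(1.0, math.exp(1 - ref_len / hyp_len))  # brevity penalty
    return bp * math.exp(log_prec)

score = bleu([["we", "saw", "a", "cat"]], [["we", "saw", "a", "cat"]])
# a perfect match scores 1.0
```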

14 pages, 5425 KiB  
Article
Swin-UperNet: A Semantic Segmentation Model for Mangroves and Spartina alterniflora Loisel Based on UperNet
by Zhenhua Wang, Jing Li, Zhilian Tan, Xiangfeng Liu and Mingjie Li
Electronics 2023, 12(5), 1111; https://doi.org/10.3390/electronics12051111 - 24 Feb 2023
Cited by 14 | Viewed by 5379
Abstract
As an ecosystem in transition from land to sea, mangroves play a vital role in wind and wave protection and biodiversity maintenance. However, the invasion of Spartina alterniflora Loisel seriously damages the mangrove wetland ecosystem. To protect mangroves scientifically and dynamically, a semantic segmentation model for mangroves and Spartina alterniflora Loisel was proposed based on UperNet (Swin-UperNet). In the proposed Swin-UperNet model, a data concatenation module was proposed to make full use of the multispectral information of remote sensing images, the backbone network was replaced with a Swin transformer to improve the feature extraction capability, and a boundary optimization module was designed to optimize the rough segmentation results. Additionally, a linear combination of cross-entropy loss and Lovász-Softmax loss was taken as the loss function of Swin-UperNet, which could address the problem of unbalanced sample distribution. Taking GF-1 and GF-6 images as the experiment data, the performance of the Swin-UperNet model was compared against that of other segmentation models in terms of pixel accuracy (PA), mean intersection over union (mIoU), and frames per second (FPS), including PSPNet, PSANet, DeepLabv3, DANet, FCN, OCRNet, and DeepLabv3+. The results showed that the Swin-UperNet model achieved the best PA of 98.87% and mIoU of 90.0%, and the efficiency of the Swin-UperNet model was higher than that of most models. In conclusion, Swin-UperNet is an efficient and accurate model for segmenting mangroves and Spartina alterniflora Loisel simultaneously, which will provide a scientific basis for Spartina alterniflora Loisel monitoring and mangrove resource conservation and management. Full article
(This article belongs to the Special Issue Applications of Deep Neural Network for Smart City)
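The linear loss combination can be sketched as a weighted sum of two terms. Cross-entropy below is standard; the second term is a soft-Jaccard (IoU) loss used here as a simplified stand-in for Lovász-Softmax, which optimises the same Jaccard objective. The weight alpha is an assumption, not the paper's value:

```python
import numpy as np

def cross_entropy(probs, target):
    # Mean negative log-likelihood of the target classes.
    return float(-np.log(probs[np.arange(len(target)), target] + 1e-12).mean())

def soft_jaccard(probs, target, cls):
    # 1 - soft intersection-over-union for one class.
    p = probs[:, cls]
    t = (target == cls).astype(float)
    inter = (p * t).sum()
    union = p.sum() + t.sum() - inter
    return float(1.0 - inter / (union + 1e-12))

def combined_loss(probs, target, alpha=0.5):
    jac = np.mean([soft_jaccard(probs, target, c) for c in range(probs.shape[1])])
    return alpha * cross_entropy(probs, target) + (1 - alpha) * jac

probs = np.array([[0.9, 0.1], [0.2, 0.8], [0.7, 0.3]])
target = np.array([0, 1, 0])
loss = combined_loss(probs, target)
```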

16 pages, 417 KiB  
Article
Fuzzy Rough Nearest Neighbour Methods for Aspect-Based Sentiment Analysis
by Olha Kaminska, Chris Cornelis and Veronique Hoste
Electronics 2023, 12(5), 1088; https://doi.org/10.3390/electronics12051088 - 22 Feb 2023
Cited by 6 | Viewed by 3211
Abstract
Fine-grained sentiment analysis, known as Aspect-Based Sentiment Analysis (ABSA), establishes the polarity of a section of text concerning a particular aspect. Aspect, sentiment, and emotion categorisation are the three steps that make up the configuration of ABSA, which we looked into for the dataset of English reviews. In this work, due to the fuzzy nature of textual data, we investigated machine learning methods based on fuzzy rough sets, which we believe are more interpretable than complex state-of-the-art models. The novelty of this paper is the use of a pipeline that incorporates all three mentioned steps and applies Fuzzy-Rough Nearest Neighbour classification techniques with their extension based on ordered weighted average operators (FRNN-OWA), combined with text embeddings based on transformers. After some improvements in the pipeline’s stages, such as using two separate models for emotion detection, we obtain the correct results for the majority of test instances (up to 81.4%) for all three classification tasks. We consider three different options for the pipeline. In two of them, all three classification tasks are performed consecutively, reducing data at each step to retain only correct predictions, while the third option performs each step independently. This solution allows us to examine the prediction results after each step and spot certain patterns. We used it for an error analysis that enables us, for each test instance, to identify the neighbouring training samples and demonstrate that our methods can extract useful patterns from the data. Finally, we compare our results with another paper that performed the same ABSA classification for the Dutch version of the dataset and conclude that our results are in line with theirs or even slightly better. Full article
(This article belongs to the Special Issue AI for Text Understanding)
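The FRNN-OWA classification step described in the abstract above can be sketched in a few lines. This is a toy illustration with made-up two-dimensional "embeddings" and a simple similarity measure; the paper applies the method to transformer-based text embeddings:

```python
def owa(values, weights):
    """Ordered weighted average: sort values descending, then take a weighted sum."""
    return sum(w * v for w, v in zip(weights, sorted(values, reverse=True)))

def similarity(x, y):
    """Fuzzy similarity: 1 minus mean absolute difference of normalised features."""
    return 1.0 - sum(abs(a - b) for a, b in zip(x, y)) / len(x)

def frnn_owa_predict(x, train, k=3):
    """Pick the class whose k most similar examples score highest under OWA."""
    weights = [2 * (k - i) / (k * (k + 1)) for i in range(k)]  # decreasing, sums to 1
    scores = {}
    for label in {lbl for _, lbl in train}:
        sims = sorted((similarity(x, xi) for xi, lbl in train if lbl == label),
                      reverse=True)[:k]
        scores[label] = owa(sims, weights)
    return max(scores, key=scores.get)

# Toy "embeddings": negative reviews cluster near (0.1, 0.2), positive near (0.85, 0.9).
train = [([0.10, 0.20], "neg"), ([0.15, 0.25], "neg"), ([0.20, 0.10], "neg"),
         ([0.80, 0.90], "pos"), ([0.85, 0.80], "pos"), ([0.90, 0.95], "pos")]
print(frnn_owa_predict([0.82, 0.88], train))  # prints "pos"
```

Because the OWA weights blend several neighbours instead of trusting the single nearest one, the decision is somewhat smoother, which is part of what makes the method interpretable: the neighbours behind each prediction can be inspected directly, as the error analysis in the abstract does.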

17 pages, 781 KiB  
Article
Distilling Monolingual Models from Large Multilingual Transformers
by Pranaydeep Singh, Orphée De Clercq and Els Lefever
Electronics 2023, 12(4), 1022; https://doi.org/10.3390/electronics12041022 - 18 Feb 2023
Cited by 4 | Viewed by 4362
Abstract
Although language modeling has been trending upwards steadily, models available for low-resourced languages are limited to large multilingual models such as mBERT and XLM-RoBERTa, which come with significant overheads for deployment vis-à-vis their model size, inference speeds, etc. We attempt to tackle this [...] Read more.
Although language modeling has been trending upwards steadily, models available for low-resourced languages are limited to large multilingual models such as mBERT and XLM-RoBERTa, which come with significant overheads for deployment vis-à-vis their model size, inference speeds, etc. We attempt to tackle this problem by proposing a novel methodology to apply knowledge distillation techniques to filter language-specific information from a large multilingual model into a small, fast monolingual model that can often outperform the teacher model. We demonstrate the viability of this methodology on two downstream tasks each for six languages. We further dive into the possible modifications to the basic setup for low-resourced languages by exploring ideas to tune the final vocabulary of the distilled models. Lastly, we perform a detailed ablation study to understand the different components of the setup better and find out what works best for the two under-resourced languages, Swahili and Slovene. Full article
(This article belongs to the Special Issue AI for Text Understanding)
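The distillation objective at the core of such a teacher–student setup can be sketched as follows. This is a plain-Python sketch of the standard soft-label distillation loss; the temperature T, mixing weight alpha and toy logits are illustrative assumptions, not the paper's exact configuration:

```python
import math

def softmax(logits, T=1.0):
    """Temperature-scaled softmax; higher T softens the distribution."""
    exps = [math.exp(z / T) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits, true_idx, T=2.0, alpha=0.5):
    """Blend a soft KL(teacher || student) term with the usual hard cross-entropy."""
    p_teacher = softmax(teacher_logits, T)
    p_student = softmax(student_logits, T)
    kl = sum(pt * math.log(pt / ps) for pt, ps in zip(p_teacher, p_student))
    ce = -math.log(softmax(student_logits)[true_idx])
    return alpha * (T * T) * kl + (1 - alpha) * ce
```

A student that already matches the teacher pays only the hard-label term; a disagreeing student is pulled toward the teacher's full output distribution, which is how language-specific knowledge inside the multilingual teacher can be transferred into the small monolingual student.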

17 pages, 3166 KiB  
Article
A Framework for Understanding Unstructured Financial Documents Using RPA and Multimodal Approach
by Seongkuk Cho, Jihoon Moon, Junhyeok Bae, Jiwon Kang and Sangwook Lee
Electronics 2023, 12(4), 939; https://doi.org/10.3390/electronics12040939 - 13 Feb 2023
Cited by 7 | Viewed by 5011
Abstract
The financial business process worldwide suffers from huge dependencies upon labor and written documents, thus making it tedious and time-consuming. In order to solve this problem, traditional robotic process automation (RPA) has recently been developed into a hyper-automation solution by combining computer vision [...] Read more.
The financial business process worldwide suffers from huge dependencies upon labor and written documents, thus making it tedious and time-consuming. In order to solve this problem, traditional robotic process automation (RPA) has recently been developed into a hyper-automation solution by combining computer vision (CV) and natural language processing (NLP) methods. These solutions are capable of image analysis, such as key information extraction and document classification. However, they leave room for improvement on text-rich document images and require large amounts of training data to process multilingual documents. This study proposes a multimodal approach-based intelligent document processing framework that combines a pre-trained deep learning model with traditional RPA used in banks to automate business processes from real-world financial document images. The proposed framework can perform classification and key information extraction on a small amount of training data and analyze multilingual documents. In order to evaluate the effectiveness of the proposed framework, extensive experiments were conducted using Korean financial document images. The experimental results show the superiority of the multimodal approach for understanding financial documents and demonstrate that adequate labeling can improve performance by up to about 15%. Full article
(This article belongs to the Special Issue Applied AI-Based Platform Technology and Application, Volume II)

20 pages, 729 KiB  
Article
A Benchmark for Dutch End-to-End Cross-Document Event Coreference Resolution
by Loic De Langhe, Thierry Desot, Orphée De Clercq and Veronique Hoste
Electronics 2023, 12(4), 850; https://doi.org/10.3390/electronics12040850 - 8 Feb 2023
Cited by 4 | Viewed by 2077
Abstract
In this paper, we present a benchmark result for end-to-end cross-document event coreference resolution in Dutch. First, the state of the art of this task in other languages is introduced, as well as currently existing resources and commonly used evaluation metrics. We then [...] Read more.
In this paper, we present a benchmark result for end-to-end cross-document event coreference resolution in Dutch. First, the state of the art of this task in other languages is introduced, as well as currently existing resources and commonly used evaluation metrics. We then build on recently published work to fully explore end-to-end event coreference resolution for the first time in the Dutch language domain. For this purpose, two well-performing transformer-based algorithms for the respective detection and coreference resolution of Dutch textual events are combined in a pipeline architecture and compared to baseline scores relying on feature-based methods. The results are promising and comparable to similar studies in higher-resourced languages; however, they also reveal that in this specific NLP domain, much work remains to be done. In order to gain more insights, an in-depth analysis of the two pipeline components is carried out to highlight and overcome possible shortcomings of the current approach and provide suggestions for future work. Full article
(This article belongs to the Special Issue AI for Text Understanding)

11 pages, 3028 KiB  
Article
Merchant Recommender System Using Credit Card Payment Data
by Suyoun Yoo and Jaekwang Kim
Electronics 2023, 12(4), 811; https://doi.org/10.3390/electronics12040811 - 6 Feb 2023
Cited by 2 | Viewed by 3984
Abstract
As the size of the domestic credit card market is steadily growing, the marketing method for credit card companies to secure customers is also changing. The process of understanding individual preferences and payment patterns has become an essential element, and credit card companies have developed [...] Read more.
As the size of the domestic credit card market is steadily growing, the marketing method for credit card companies to secure customers is also changing. The process of understanding individual preferences and payment patterns has become an essential element, and credit card companies have developed sophisticated personalized marketing methods to properly understand customers’ interests and meet their needs. Based on this, a personalized system that recommends products or stores suitable for customers acts to attract customers more effectively. However, existing neural-network recommendation models implementing the general framework cannot reflect the major domain information of credit card payment data when applied directly to store recommendations. This study proposes a model specialized for member-store recommendation that reflects the domain information of credit card payment data. The customers’ gender and age information were added to the learning data. The industry category and region information of the settlement member stores were reconstructed to be learned together with the interaction data. A personalized recommendation system was realized by combining historical card payment data with customer and member-store information to recommend member stores that customers are highly likely to use in the future. This study’s proposed model (NMF_CSI) showed a performance improvement of 3% based on HR@10 and 5% based on NDCG@10, compared to previous models. In addition, customer coverage was expanded so that the recommendation model can be applied not only to customers actively using credit cards but also to customers with little usage data. Full article
(This article belongs to the Special Issue Application of Machine Learning and Intelligent Systems)
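The two ranking metrics quoted above, HR@10 and NDCG@10, can be computed per user as sketched below (hypothetical helper names; `ranked` is the model's recommendation list and `target` a store the customer actually used):

```python
import math

def hit_rate_at_k(ranked, target, k=10):
    """HR@k: 1 if the true item appears anywhere in the top-k list."""
    return 1.0 if target in ranked[:k] else 0.0

def ndcg_at_k(ranked, target, k=10):
    """NDCG@k for a single relevant item: discounted by the log of its rank."""
    if target in ranked[:k]:
        return 1.0 / math.log2(ranked.index(target) + 2)
    return 0.0

ranked = ["store_3", "store_7", "store_1", "store_9"]
print(hit_rate_at_k(ranked, "store_7"))  # prints 1.0
print(ndcg_at_k(ranked, "store_7"))      # 1/log2(3), about 0.63
```

Both scores are then averaged over all test users; NDCG rewards placing the true store near the top of the list, while HR only checks whether it appears at all.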

30 pages, 19008 KiB  
Article
Automated Pre-Play Analysis of American Football Formations Using Deep Learning
by Jacob Newman, Andrew Sumsion, Shad Torrie and Dah-Jye Lee
Electronics 2023, 12(3), 726; https://doi.org/10.3390/electronics12030726 - 1 Feb 2023
Cited by 12 | Viewed by 10263
Abstract
Annotation and analysis of sports videos is a time-consuming task that, once automated, will provide benefits to coaches, players, and spectators. American football, as the most watched sport in the United States, could especially benefit from this automation. Manual annotation and analysis of [...] Read more.
Annotation and analysis of sports videos is a time-consuming task that, once automated, will provide benefits to coaches, players, and spectators. American football, as the most watched sport in the United States, could especially benefit from this automation. Manual annotation and analysis of recorded videos of American football games is an inefficient and tedious process. Currently, most college football programs focus on annotating offensive formations to help them develop game plans for their upcoming games. As a first step to further research for this unique application, we use computer vision and deep learning to analyze an overhead image of a football play immediately before the play begins. This analysis consists of locating individual football players and labeling their position or roles, as well as identifying the formation of the offensive team. We obtain greater than 90% accuracy on both player detection and labeling, and 84.8% accuracy on formation identification. These results prove the feasibility of building a complete American football strategy analysis system using artificial intelligence. Collecting a larger dataset in real-world situations will enable further improvements. This would likewise enable American football teams to analyze game footage quickly. Full article
(This article belongs to the Special Issue Advances of Artificial Intelligence and Vision Applications)

21 pages, 1866 KiB  
Article
Towards Deploying DNN Models on Edge for Predictive Maintenance Applications
by Rick Pandey, Sebastian Uziel, Tino Hutschenreuther and Silvia Krug
Electronics 2023, 12(3), 639; https://doi.org/10.3390/electronics12030639 - 27 Jan 2023
Cited by 14 | Viewed by 2720
Abstract
Almost all rotating machinery in industry has bearings as its key building block, and most of these machines run 24 × 7. This makes bearing health prediction an active research area for predictive maintenance solutions. Many state-of-the-art Deep Neural [...] Read more.
Almost all rotating machinery in industry has bearings as its key building block, and most of these machines run 24 × 7. This makes bearing health prediction an active research area for predictive maintenance solutions. Many state-of-the-art Deep Neural Network (DNN) models have been proposed to solve this. However, most of these high-performance models are computationally expensive and have high memory requirements. This limits their use to very specific industrial applications with powerful hardware deployed close to the machinery. In order to bring DNN-based solutions to potential use in industry, we need to deploy these models on Microcontroller Units (MCUs), which are cost-effective and energy-efficient. However, this step is typically neglected in the literature as it poses new challenges. The primary concern when running inference with DNN models on MCUs is the on-chip memory of the MCU, which has to fit the model, the data and additional code to run the system. Almost all the state-of-the-art models fail this litmus test since they feature too many parameters. In this paper, we show the challenges related to the deployment, review possible solutions and evaluate one of them, showing how the deployment can be realized and what steps are needed. The focus is on the steps required for the actual deployment rather than finding the optimal solution. This paper is among the first to show the deployment on MCUs for a predictive maintenance use case. We first analyze the gap between state-of-the-art benchmark DNN models for bearing defect classification and the memory constraints of two MCU variants. Additionally, we review options to reduce the model size, such as pruning and quantization. Afterwards, we evaluate a solution to deploy the DNN models by pruning them in order to fit them into microcontrollers.
Our results show that most models under test can be reduced to fit MCU memory for a maximum loss of 3% in average accuracy of the pruned models in comparison to the original models. Based on the results, we also discuss which methods are promising and which combination of model and feature work best for the given classification problem. Full article
(This article belongs to the Section Artificial Intelligence)
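Of the size-reduction options reviewed above, magnitude pruning is the simplest to sketch. The toy function below zeroes the smallest-magnitude fraction of a flat weight list; real deployments prune tensors per layer and fine-tune afterwards to recover accuracy:

```python
def magnitude_prune(weights, sparsity):
    """Zero out roughly the smallest-magnitude `sparsity` fraction of weights."""
    k = int(sparsity * len(weights))  # how many weights to drop
    if k == 0:
        return list(weights)
    threshold = sorted(abs(w) for w in weights)[k - 1]
    # Ties at the threshold may prune slightly more than k weights.
    return [0.0 if abs(w) <= threshold else w for w in weights]

pruned = magnitude_prune([0.1, -0.2, 0.3, -0.4], sparsity=0.5)
print(pruned)  # [0.0, 0.0, 0.3, -0.4]
```

Zeroed weights can then be stored in a sparse format or removed structurally, which is what lets a pruned model fit within an MCU's on-chip memory budget.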

17 pages, 4634 KiB  
Article
Visualization Technology and Deep-Learning for Multilingual Spam Message Detection
by Hwabin Lee, Sua Jeong, Seogyeong Cho and Eunjung Choi
Electronics 2023, 12(3), 582; https://doi.org/10.3390/electronics12030582 - 24 Jan 2023
Cited by 17 | Viewed by 4590
Abstract
Spam detection is an essential and unavoidable problem in today’s society. Most of the existing studies have used string-based detection methods with models and have been conducted on a single language, especially with English datasets. However, in the current global society, research on [...] Read more.
Spam detection is an essential and unavoidable problem in today’s society. Most of the existing studies have used string-based detection methods with models and have been conducted on a single language, especially with English datasets. However, in the current global society, research on languages other than English is needed. String-based spam detection methods perform different preprocessing steps depending on the language type due to differences in grammatical characteristics. Therefore, our study proposes a text-processing method and a string-imaging method. The CNN 2D visualization technique used in this paper processes the data as images, so it can be applied equally to datasets in languages other than English. In this study, English and Korean spam data were used. As a result of this study, the string-based detection models of RNN, LSTM, and CNN 1D showed average accuracies of 0.9871, 0.9906, and 0.9912, respectively. On the other hand, the CNN 2D image-based detection model was confirmed to have an average accuracy of 0.9957. Through this study, we present a solution showing that image-based processing is more effective than string-based processing for string data and that multilingual processing is possible based on the CNN 2D model. Full article
(This article belongs to the Section Artificial Intelligence)
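The string-imaging idea can be sketched language-independently by encoding each message's UTF-8 bytes into a fixed-size grayscale grid (toy dimensions; the paper's exact imaging procedure may differ):

```python
def string_to_image(text, h=16, w=16):
    """Map a message to an h x w grid of byte values (0-255), zero-padded."""
    data = list(text.encode("utf-8"))[: h * w]   # truncate long messages
    data += [0] * (h * w - len(data))            # pad short ones
    return [data[r * w:(r + 1) * w] for r in range(h)]

img = string_to_image("Free prize! 당첨!", h=8, w=8)
print(len(img), len(img[0]))  # 8 8
```

Because UTF-8 bytes are language-agnostic, the same preprocessing serves English and Korean alike, which is what allows a single CNN 2D model to skip language-specific tokenisation.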

17 pages, 5424 KiB  
Article
TMD-BERT: A Transformer-Based Model for Transportation Mode Detection
by Ifigenia Drosouli, Athanasios Voulodimos, Paris Mastorocostas, Georgios Miaoulis and Djamchid Ghazanfarpour
Electronics 2023, 12(3), 581; https://doi.org/10.3390/electronics12030581 - 24 Jan 2023
Cited by 10 | Viewed by 4638
Abstract
Differentiating various transportation modes and detecting the means of transport an individual uses is the focal point of transportation mode detection, one of the problems in the field of intelligent transport which receives the attention of researchers because of its interesting [...] Read more.
Differentiating various transportation modes and detecting the means of transport an individual uses is the focal point of transportation mode detection, one of the problems in the field of intelligent transport that receives the attention of researchers because of its interesting and useful applications. In this paper, we present TMD-BERT, a transformer-based model for transportation mode detection based on sensor data. The proposed transformer-based approach processes the entire sequence of data, understands the importance of each part of the input sequence, and assigns weights accordingly, using attention mechanisms to learn global dependencies in the sequence. The experimental evaluation shows the high performance of the model compared to the state of the art, demonstrating a prediction accuracy of 98.8%. Full article
(This article belongs to the Special Issue Deep Learning for Computer Vision and Pattern Recognition)
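The attention weighting described above can be sketched for a single query vector (plain-Python scaled dot-product attention; real transformers batch this over many heads and learned projections):

```python
import math

def attention(query, keys, values):
    """Scaled dot-product attention for one query over a sequence of key/value vectors."""
    d = len(query)
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d) for key in keys]
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    weights = [e / total for e in exps]  # how much each sequence position matters
    dim = len(values[0])
    return [sum(w * v[i] for w, v in zip(weights, values)) for i in range(dim)]

# Two identical keys share the weight equally, so the output is the mean value.
print(attention([1.0, 0.0], [[1.0, 0.0], [1.0, 0.0]], [[1.0], [3.0]]))  # [2.0]
```

Each sensor reading attends to every other reading in the window, which is how the model learns the global dependencies the abstract mentions.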

23 pages, 948 KiB  
Article
A Truthful and Reliable Incentive Mechanism for Federated Learning Based on Reputation Mechanism and Reverse Auction
by Ao Xiong, Yu Chen, Hao Chen, Jiewei Chen, Shaojie Yang, Jianping Huang, Zhongxu Li and Shaoyong Guo
Electronics 2023, 12(3), 517; https://doi.org/10.3390/electronics12030517 - 19 Jan 2023
Cited by 9 | Viewed by 4225
Abstract
As a distributed machine learning paradigm, federated learning (FL) enables participating clients to share only model gradients instead of local data and achieves the secure sharing of private data. However, the lack of clients’ willingness to participate in FL and the malicious influence [...] Read more.
As a distributed machine learning paradigm, federated learning (FL) enables participating clients to share only model gradients instead of local data and achieves the secure sharing of private data. However, the lack of clients’ willingness to participate in FL and the malicious influence of unreliable clients both seriously degrade the performance of FL. Current research on FL incentive mechanisms lacks an accurate assessment of clients’ truthfulness and reliability, and an incentive mechanism built on untruthful and unreliable clients is itself unreliable and inefficient. To solve this problem, we propose an incentive mechanism based on a reputation mechanism and a reverse auction to achieve more truthful, more reliable, and more efficient FL. First, we introduce the reputation mechanism to measure clients’ truthfulness and reliability through multiple reputation evaluations and design a reliable client selection scheme. Then, a reverse auction is introduced to select the optimal clients that maximize the social surplus while satisfying individual rationality, incentive compatibility, and weak budget balance. Extensive experimental results demonstrate that this incentive mechanism can motivate more clients with high-quality data and high reputations to participate in FL at less cost, increasing the economic benefit of FL tasks by 31% and improving the accuracy from 0.9356 to 0.9813, thereby promoting the efficient and stable development of the FL service trading market. Full article
(This article belongs to the Special Issue Artificial Intelligence Technologies and Applications)
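A greatly simplified greedy sketch of reputation-weighted client selection in a reverse auction is shown below. The field names and constants are hypothetical, and the paper's mechanism additionally enforces truthfulness, individual rationality and weak budget balance, which a plain greedy pass does not:

```python
def select_clients(bids, budget):
    """Greedily pick clients with the best bid-per-reputation ratio within budget."""
    ranked = sorted(bids, key=lambda b: b["bid"] / b["reputation"])
    chosen, spent = [], 0.0
    for b in ranked:
        if spent + b["bid"] <= budget:
            chosen.append(b["name"])
            spent += b["bid"]
    return chosen

bids = [{"name": "a", "bid": 5.0, "reputation": 0.9},
        {"name": "b", "bid": 2.0, "reputation": 0.8},
        {"name": "c", "bid": 9.0, "reputation": 0.5}]
print(select_clients(bids, budget=8.0))  # ['b', 'a']
```

Dividing each bid by the client's reputation is one way to make cheap but unreliable clients less attractive than slightly dearer, well-reputed ones.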

19 pages, 13462 KiB  
Article
Robust and Lightweight Deep Learning Model for Industrial Fault Diagnosis in Low-Quality and Noisy Data
by Jaegwang Shin and Suan Lee
Electronics 2023, 12(2), 409; https://doi.org/10.3390/electronics12020409 - 13 Jan 2023
Cited by 8 | Viewed by 3568
Abstract
Machines in factories are typically operated 24 h a day to support production, which may result in malfunctions. Such mechanical malfunctions may disrupt factory output, resulting in financial losses or human casualties. Therefore, we investigate a deep learning model that can detect abnormalities [...] Read more.
Machines in factories are typically operated 24 h a day to support production, which may result in malfunctions. Such mechanical malfunctions may disrupt factory output, resulting in financial losses or human casualties. Therefore, we investigate a deep learning model that can detect abnormalities in machines based on their operating noise. Various data preprocessing methods, including the discrete wavelet transform, the Hilbert transform, and the short-time Fourier transform, were applied to extract characteristics from machine-operating noises. To create a model that can be used in factories, the environment of real factories was simulated by introducing noise and quality degradation to the sound dataset for Malfunctioning Industrial Machine Investigation and Inspection (MIMII). Thus, we propose a lightweight model that runs reliably even in noisy and low-quality sound environments, such as a real factory: a Convolutional Neural Network–Long Short-Term Memory (CNN–LSTM) model using Short-Time Fourier Transforms (STFTs). The proposed model is highly practical because it requires only about 6.6% of the parameters of the underlying CNN while differing in performance by less than 0.5%. Full article
(This article belongs to the Special Issue Application Research Using AI, IoT, HCI, and Big Data Technologies)
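The STFT preprocessing feeding the proposed CNN–LSTM can be sketched with a direct DFT over windowed frames (toy frame sizes; a real pipeline would use an FFT library and typically mel or log scaling):

```python
import cmath, math

def stft_mag(signal, frame=8, hop=4):
    """Magnitude spectrogram: Hann-windowed frames, each transformed by a direct DFT."""
    win = [0.5 - 0.5 * math.cos(2 * math.pi * n / (frame - 1)) for n in range(frame)]
    frames = []
    for start in range(0, len(signal) - frame + 1, hop):
        seg = [s * w for s, w in zip(signal[start:start + frame], win)]
        spec = [abs(sum(seg[n] * cmath.exp(-2j * math.pi * k * n / frame)
                        for n in range(frame)))
                for k in range(frame // 2 + 1)]  # keep non-negative frequencies
        frames.append(spec)
    return frames

# A tone at bin 2 of an 8-point frame shows up as a peak in every time frame.
tone = [math.sin(2 * math.pi * 2 * n / 8) for n in range(16)]
spectrogram = stft_mag(tone)
```

The resulting time–frequency grid is what the CNN layers consume as an image before the LSTM models its evolution over time.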

14 pages, 3740 KiB  
Article
Precise Identification of Food Smells to Enable Human–Computer Interface for Digital Smells
by Yaonian Li, Zhenyi Ye and Qiliang Li
Electronics 2023, 12(2), 418; https://doi.org/10.3390/electronics12020418 - 13 Jan 2023
Cited by 4 | Viewed by 3260
Abstract
Food safety technologies are important in maintaining physical health for everyone. It is important to digitize the scents of foods to enable an effective human–computer interface for smells. In this work, an intelligent gas-sensing system is designed and integrated to capture the smells [...] Read more.
Food safety technologies are important in maintaining physical health for everyone. It is important to digitize the scents of foods to enable an effective human–computer interface for smells. In this work, an intelligent gas-sensing system is designed and integrated to capture the smells of food and convert them into digital scents. Fruit samples are used for testing as they release volatile organic components (VOCs) which can be detected by the gas sensors in the system. Decision tree, principal component analysis (PCA), linear discriminant analysis (LDA), and one-dimensional convolutional neural network (1D-CNN) algorithms were adopted and optimized to analyze and precisely classify the sensor responses. Furthermore, the proposed system and data processing algorithms can be used to precisely identify the digital scents and monitor the decomposition dynamics of different foods. Such a promising technology is important for mutual understanding between humans and computers to enable an interface for digital scents, which is very attractive for food identification and safety monitoring. Full article
(This article belongs to the Special Issue Real-Time Visual Information Processing in Human-Computer Interface)

14 pages, 3203 KiB  
Article
Application of Metal Oxide Memristor Models in Logic Gates
by Valeri Mladenov
Electronics 2023, 12(2), 381; https://doi.org/10.3390/electronics12020381 - 11 Jan 2023
Cited by 6 | Viewed by 3602
Abstract
Memristors, as new electronic elements, have been under rigorous study in recent years, owing to their good memory and switching properties, low power consumption, nano-dimensions and good compatibility with present integrated circuits, related to their promising applications in electronic circuits and chips. [...] Read more.
Memristors, as new electronic elements, have been under rigorous study in recent years, owing to their good memory and switching properties, low power consumption, nano-dimensions and good compatibility with present integrated circuits, related to their promising applications in electronic circuits and chips. The main purpose of this paper is the application and analysis of the operation of metal–oxide memristors in logic gates and complex schemes, using several standard and modified memristor models, and a comparison of their behavior in LTSPICE under hard switching, paying attention to their fast operation and switching properties. Several basic logic gates based on memristors and CMOS transistors are considered: OR, AND, NOR, NAND and XOR. Logic schemes based on memristors are applicable in electronic circuits with artificial intelligence. They are analyzed in LTSPICE for pulse signals and hard-switching operation of the memristors. The analyses confirm the proper, fast operation and good switching properties of the considered modified memristor models in logical circuits, compared to several standard models. The modified models are compared to several classical models according to significant criteria such as operating frequency, simulation time, accuracy, complexity and switching properties. Based on the basic memristor logic gates, a more complex logic scheme is analyzed. Full article
(This article belongs to the Section Artificial Intelligence Circuits and Systems (AICAS))

17 pages, 1739 KiB  
Article
HW-ADAM: FPGA-Based Accelerator for Adaptive Moment Estimation
by Weiyi Zhang, Liting Niu, Debing Zhang, Guangqi Wang, Fasih Ud Din Farrukh and Chun Zhang
Electronics 2023, 12(2), 263; https://doi.org/10.3390/electronics12020263 - 4 Jan 2023
Cited by 12 | Viewed by 3671
Abstract
The selection of the optimizer is critical for convergence in the field of on-chip training. As one second moment optimizer, adaptive moment estimation (ADAM) shows a significant advantage compared with non-moment optimizers such as stochastic gradient descent (SGD) and first-moment optimizers such as [...] Read more.
The selection of the optimizer is critical for convergence in the field of on-chip training. As one second moment optimizer, adaptive moment estimation (ADAM) shows a significant advantage compared with non-moment optimizers such as stochastic gradient descent (SGD) and first-moment optimizers such as Momentum. However, ADAM is hard to implement on hardware due to the computationally intensive operations, including square, root extraction, and division. This work proposed Hardware-ADAM (HW-ADAM), an efficient fixed-point accelerator for ADAM highlighting hardware-oriented mathematical optimizations. HW-ADAM has two designs: Efficient-ADAM (E-ADAM) unit reduced the hardware resource consumption by around 90% compared with the related work. E-ADAM achieved a throughput of 2.89 MUOP/s (Million Updating Operation per Second), which is 2.8× of the original ADAM. Fast-ADAM (F-ADAM) unit reduced 91.5% flip-flops, 65.7% look-up tables, and 50% DSPs compared with the related work. The F-ADAM unit achieved a throughput of 16.7 MUOP/s, which is 16.4× of the original ADAM. Full article
(This article belongs to the Section Artificial Intelligence Circuits and Systems (AICAS))
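The operations that make ADAM expensive in hardware are visible in a plain scalar update step. This is a floating-point reference sketch; the accelerator's contribution is reformulating the square, square root and division for fixed-point logic:

```python
import math

def adam_step(w, grad, m, v, t, lr=0.001, b1=0.9, b2=0.999, eps=1e-8):
    """One scalar ADAM update; square, sqrt and divide are the costly hardware ops."""
    m = b1 * m + (1 - b1) * grad            # first moment (momentum)
    v = b2 * v + (1 - b2) * grad * grad     # second moment (squared gradient)
    m_hat = m / (1 - b1 ** t)               # bias correction for step t
    v_hat = v / (1 - b2 ** t)
    w = w - lr * m_hat / (math.sqrt(v_hat) + eps)
    return w, m, v

w, m, v = adam_step(1.0, 0.5, 0.0, 0.0, t=1)
print(round(w, 6))  # 0.999: the first step has magnitude close to lr regardless of |grad|
```

SGD needs only a multiply and a subtract per weight; the extra square, root extraction and division here are exactly the operations the abstract singles out as hard to implement on hardware.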

21 pages, 4540 KiB  
Article
FPSNET: An Architecture for Neural-Network-Based Feature Point Extraction for SLAM
by Fasih Ud Din Farrukh, Weiyi Zhang, Chun Zhang, Zhihua Wang and Hanjun Jiang
Electronics 2022, 11(24), 4168; https://doi.org/10.3390/electronics11244168 - 13 Dec 2022
Cited by 2 | Viewed by 2025
Abstract
The hardware architecture of a deep-neural-network-based feature point extraction method is proposed for the simultaneous localization and mapping (SLAM) in robotic applications, which is named the Feature Point based SLAM Network (FPSNET). Some key techniques are deployed to improve the hardware and power [...] Read more.
The hardware architecture of a deep-neural-network-based feature point extraction method is proposed for simultaneous localization and mapping (SLAM) in robotic applications, named the Feature Point based SLAM Network (FPSNET). Some key techniques are deployed to improve the hardware and power efficiency. The data path is devised to reduce overall off-chip memory accesses. The intermediate data and partial sums resulting from the convolution process are stored in available on-chip memories, and optimized hardware is employed to compute the one-point activation function. Meanwhile, address generation units are used to avoid data overlapping in memories. The proposed FPSNET has been designed in 65 nm CMOS technology with a core area of 8.3 mm2. This work reduces the memory overhead by 50% compared to traditional data storage for activation and by 35% overall for on-chip memories. The synthesis and simulation results show that it achieved 2.0× higher performance compared with the previous design while achieving a power efficiency of 1.0 TOPS/W, which is 2.4× better than previous work. Compared to other ASIC designs with similar peak throughput and power efficiency, the presented FPSNET has the smallest chip area (at least a 42.4% reduction). Full article
(This article belongs to the Section Artificial Intelligence Circuits and Systems (AICAS))

10 pages, 3473 KiB  
Article
Advances in Ku-Band GaN Single Chip Front End for Space SARs: From System Specifications to Technology Selection
by Francesco Scappaviva, Gianni Bosi, Andrea Biondi, Sara D’Angelo, Luca Cariani, Valeria Vadalà, Antonio Raffo, Davide Resca, Elisa Cipriani and Giorgio Vannini
Electronics 2022, 11(19), 2998; https://doi.org/10.3390/electronics11192998 - 21 Sep 2022
Cited by 5 | Viewed by 5381
Abstract
In this paper, a single-chip front-end (SCFE) operating in the Ku-band (12–17 GHz) is presented. It is designed exploiting a GaN-on-SiC technology with a 150 nm gate length provided by the UMS foundry. This MMIC integrates high-power and low-noise amplification functions [...] Read more.
In this paper, a single-chip front-end (SCFE) operating in the Ku-band (12–17 GHz) is presented. It is designed exploiting a GaN-on-SiC technology with a 150 nm gate length provided by the UMS foundry. This MMIC integrates high-power and low-noise amplification functions enabled by a single-pole double-throw (SPDT) switch, occupying a total area of 20 mm2. The transmit chain (Tx) delivers 39 dBm of output power, a power-added efficiency (PAE) higher than 30% and a 22 dB power gain. The receive path (Rx) offers a noise figure (NF) lower than 2.8 dB with 25 dB of linear gain. The Rx port output power leakage is limited on chip to below 15 dBm even at high compression levels. Finally, a complete characterization of the SCFE in the Rx and Tx modes is presented, also showing the measurement of the recovery time in the presence of large-signal interference. Full article
(This article belongs to the Special Issue Power Amplifier for Wireless Communication)

9 pages, 4577 KiB  
Article
High-Performance and Robust Binarized Neural Network Accelerator Based on Modified Content-Addressable Memory
by Sureum Choi, Youngjun Jeon and Yeongkyo Seo
Electronics 2022, 11(17), 2780; https://doi.org/10.3390/electronics11172780 - 3 Sep 2022
Cited by 1 | Viewed by 2691
Abstract
The binarized neural network (BNN) is one of the most promising candidates for low-cost convolutional neural networks (CNNs). This is because of its significant reduction in memory and computational costs, and reasonable classification accuracy. Content-addressable memory (CAM) can perform binarized convolution operations efficiently [...] Read more.
The binarized neural network (BNN) is one of the most promising candidates for low-cost convolutional neural networks (CNNs) because of its significant reduction in memory and computational costs and its reasonable classification accuracy. Content-addressable memory (CAM) can perform binarized convolution operations efficiently, since the bitwise comparison in CAM matches well with the binarized multiply operation in a BNN. However, a significant design issue in CAM-based BNN accelerators is that operational reliability is severely degraded by process variations during match-line (ML) sensing operations. In this paper, we propose a novel ML sensing scheme to reduce the hardware error probability. Most errors occur when the difference between the number of matches in the evaluation ML and the reference ML is small; thus, the proposed hardware identifies cases that are vulnerable to process variations using dual references. The proposed dual-reference sensing structure has >49% fewer ML sensing errors than the conventional design, leading to a >1.0% accuracy improvement for Fashion MNIST image classification. In addition, owing to the parallel convolution operation of the CAM-based BNN accelerator, the proposed hardware achieved a >34% processing-time improvement compared with the digital logic implementation. Full article
(This article belongs to the Section Artificial Intelligence Circuits and Systems (AICAS))
19 pages, 8191 KiB  
Article
Single-Objective Particle Swarm Optimization-Based Chaotic Image Encryption Scheme
by Jingya Wang, Xianhua Song and Ahmed A. Abd El-Latif
Electronics 2022, 11(16), 2628; https://doi.org/10.3390/electronics11162628 - 22 Aug 2022
Cited by 22 | Viewed by 3377
Abstract
High security has always been the ultimate goal of image encryption, and the closer the ciphertext image is to the true random number, the higher the security. Aiming at popular chaotic image encryption methods, particle swarm optimization (PSO) is studied to select the [...] Read more.
High security has always been the ultimate goal of image encryption: the closer the ciphertext image is to a true random sequence, the higher the security. Targeting popular chaotic image encryption methods, particle swarm optimization (PSO) is studied to select the parameters and initial values of chaotic systems so that the chaotic sequence has higher entropy. Unlike other PSO-based image encryption methods, the proposed method takes the parameters and initial values of the chaotic system as particles instead of encrypted images, which gives it lower complexity and makes it easier to apply in real-time scenarios. To validate the optimization framework, this paper designs a new image encryption scheme. The algorithm mainly includes key selection, chaotic sequence preprocessing, block scrambling, expansion, confusion, and diffusion. The key is selected by PSO and fed into the chaotic map, and the generated chaotic sequence is preprocessed. Based on block theory, a new intrablock and interblock scrambling method is designed, which is combined with image expansion to encrypt the image. Subsequently, a confusion and diffusion framework is used as the last step of the encryption process, including row confusion diffusion and column confusion diffusion, which takes security a step further. Several experimental tests demonstrate that the scheme has good encryption performance and higher security compared with some popular image encryption methods. Full article
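As a rough illustration of treating chaotic-system parameters (rather than encrypted images) as PSO particles, the hypothetical sketch below searches the logistic map's parameter r and initial value x0 for a high-entropy sequence. The logistic map, swarm size, and inertia/acceleration constants are assumptions for illustration, not details from the paper.

```python
import math
import random

def logistic_seq(r, x0, n=2000):
    """Iterate the logistic map x <- r*x*(1-x)."""
    x, out = x0, []
    for _ in range(n):
        x = r * x * (1 - x)
        out.append(x)
    return out

def entropy(seq, bins=32):
    """Shannon entropy (bits) of a histogram of the sequence over [0,1]."""
    counts = [0] * bins
    for v in seq:
        counts[min(int(v * bins), bins - 1)] += 1
    n = len(seq)
    return -sum(c / n * math.log2(c / n) for c in counts if c)

def pso(n_particles=10, iters=30, seed=1):
    """PSO over (r, x0); each particle is a candidate chaotic-map key."""
    rng = random.Random(seed)
    lo, hi = (3.57, 0.01), (4.0, 0.99)
    pos = [[rng.uniform(lo[d], hi[d]) for d in range(2)] for _ in range(n_particles)]
    vel = [[0.0, 0.0] for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pscore = [entropy(logistic_seq(*p)) for p in pos]
    g = max(range(n_particles), key=lambda i: pscore[i])
    gbest, gscore = pbest[g][:], pscore[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(2):
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * rng.random() * (pbest[i][d] - pos[i][d])
                             + 1.5 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] = min(max(pos[i][d] + vel[i][d], lo[d]), hi[d])
            s = entropy(logistic_seq(*pos[i]))
            if s > pscore[i]:
                pbest[i], pscore[i] = pos[i][:], s
                if s > gscore:
                    gbest, gscore = pos[i][:], s
    return gbest, gscore
```

The fitness here is plain histogram entropy; a real scheme would likely combine several randomness measures.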
(This article belongs to the Special Issue Pattern Recognition and Machine Learning Applications)
12 pages, 2849 KiB  
Article
A Configurable Accelerator for Keyword Spotting Based on Small-Footprint Temporal Efficient Neural Network
by Keyan He, Dihu Chen and Tao Su
Electronics 2022, 11(16), 2571; https://doi.org/10.3390/electronics11162571 - 17 Aug 2022
Cited by 5 | Viewed by 3200
Abstract
Keyword spotting (KWS) plays a crucial role in human–machine interactions involving smart devices. In recent years, temporal convolutional networks (TCNs) have performed outstandingly with less computational complexity, in comparison with classical convolutional neural network (CNN) methods. However, it remains challenging to achieve a [...] Read more.
Keyword spotting (KWS) plays a crucial role in human–machine interactions involving smart devices. In recent years, temporal convolutional networks (TCNs) have performed outstandingly with less computational complexity than classical convolutional neural network (CNN) methods. However, it remains challenging to achieve a trade-off between a small-footprint model and high accuracy for edge deployment of a KWS system. In this article, we propose a small-footprint model based on a modified temporal efficient neural network (TENet) and a simplified mel-frequency cepstrum coefficient (MFCC) algorithm. With batch-norm folding and int8 quantization of the network, our model achieves an accuracy of 95.36% on the Google Speech Command Dataset (GSCD) with only 18 K parameters and 461 K multiplications. Furthermore, following a hardware/model co-design approach, we propose an optimized dataflow and a configurable hardware architecture for TENet inference. The proposed accelerator, implemented on a Xilinx Zynq 7z020, achieves an energy efficiency of 25.6 GOPS/W and reduces the runtime by 3.1× compared with state-of-the-art work. Full article
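Int8 quantization is a standard step for shrinking such a model. The minimal sketch below shows generic symmetric per-tensor int8 quantization of a weight list, under the usual w ≈ scale · q convention; it is an illustration, not the paper's quantizer.

```python
def quantize_int8(weights):
    """Symmetric per-tensor int8 quantization: w ~= scale * q, q in [-127, 127]."""
    max_abs = max(abs(w) for w in weights) or 1.0
    scale = max_abs / 127.0
    q = [max(-127, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from the int8 codes."""
    return [scale * v for v in q]
```

The worst-case roundtrip error of each weight is half the scale step.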
(This article belongs to the Section Artificial Intelligence Circuits and Systems (AICAS))
4 pages, 168 KiB  
Editorial
Machine Learning in Electronic and Biomedical Engineering
by Claudio Turchetti and Laura Falaschetti
Electronics 2022, 11(15), 2438; https://doi.org/10.3390/electronics11152438 - 4 Aug 2022
Viewed by 2262
Abstract
In recent years, machine learning (ML) algorithms have become of paramount importance in computer science research, both in the electronic and biomedical fields [...] Full article
(This article belongs to the Special Issue Machine Learning in Electronic and Biomedical Engineering)
14 pages, 418 KiB  
Article
Improving FPGA Based Impedance Spectroscopy Measurement Equipment by Means of HLS Described Neural Networks to Apply Edge AI
by Jorge Fe, Rafael Gadea-Gironés, Jose M. Monzo, Ángel Tebar-Ruiz and Ricardo Colom-Palero
Electronics 2022, 11(13), 2064; https://doi.org/10.3390/electronics11132064 - 30 Jun 2022
Cited by 3 | Viewed by 4164
Abstract
The artificial intelligence (AI) application in instruments such as impedance spectroscopy highlights the difficulty to choose an electronic technology that correctly solves the basic performance problems, adaptation to the context, flexibility, precision, autonomy, and speed of design. Present work demonstrates that FPGAs, in [...] Read more.
Applying artificial intelligence (AI) in instruments such as impedance spectroscopy highlights the difficulty of choosing an electronic technology that correctly addresses the basic performance requirements: adaptation to the context, flexibility, precision, autonomy, and speed of design. The present work demonstrates that FPGAs, in conjunction with optimized high-level synthesis (HLS), allow an efficient connection between the signals sensed by the instrument and the artificial neural network-based AI computing block that analyzes them. State-of-the-art comparisons and experimental results also demonstrate that our designed and developed architectures offer the best compromise between performance, efficiency, and system cost in terms of artificial neural network implementation. In the present work, a computational efficiency above 21 Mps/DSP and a power efficiency below 1.24 mW/Mps are achieved. These results are all the more relevant because the system can be implemented on a low-cost FPGA. Full article
(This article belongs to the Special Issue Energy-Efficient Processors, Systems, and Their Applications)
17 pages, 1037 KiB  
Article
The Diversification and Enhancement of an IDS Scheme for the Cybersecurity Needs of Modern Supply Chains
by Dimitris Deyannis, Eva Papadogiannaki, Grigorios Chrysos, Konstantinos Georgopoulos and Sotiris Ioannidis
Electronics 2022, 11(13), 1944; https://doi.org/10.3390/electronics11131944 - 22 Jun 2022
Cited by 4 | Viewed by 4754
Abstract
Despite the tremendous socioeconomic importance of supply chains (SCs), security officers and operators are faced with no easy and integrated way for protecting their critical, and interconnected, infrastructures from cyber-attacks. As a result, solutions and methodologies that support the detection of malicious activity [...] Read more.
Despite the tremendous socioeconomic importance of supply chains (SCs), security officers and operators lack an easy, integrated way to protect their critical, interconnected infrastructures from cyber-attacks. As a result, solutions and methodologies that support the detection of malicious activity on SCs are constantly being researched and proposed. This work presents the implementation of a low-cost, reconfigurable intrusion detection system (IDS) at the edge that can be easily integrated into SC networks, thereby elevating their level of security. Specifically, the proposed system offers real-time cybersecurity intrusion detection over high-speed networks and services by offloading elements of the security check workloads onto dedicated reconfigurable hardware. Our solution uses a novel framework that implements the Aho–Corasick algorithm on the reconfigurable fabric of a multi-processor system-on-chip (MPSoC), supporting parallel matching of multiple network packet patterns. An initial performance evaluation of this proof of concept shows that it holds the potential to outperform existing software-based solutions while unburdening SC nodes from demanding cybersecurity check workloads. The system's performance and efficiency were evaluated in a real-life environment in the context of the European Union's Horizon 2020 research and innovation programme project CYRENE. Full article
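The Aho–Corasick algorithm named above builds a trie with failure links so that all signature patterns are matched in a single pass over the payload. A minimal software reference version (the paper implements the algorithm in reconfigurable fabric, not Python) might look like:

```python
from collections import deque

class AhoCorasick:
    def __init__(self, patterns):
        # State 0 is the trie root; each state has goto transitions,
        # a failure link, and the set of patterns ending there.
        self.goto = [{}]
        self.fail = [0]
        self.out = [set()]
        for p in patterns:
            s = 0
            for ch in p:
                if ch not in self.goto[s]:
                    self.goto.append({}); self.fail.append(0); self.out.append(set())
                    self.goto[s][ch] = len(self.goto) - 1
                s = self.goto[s][ch]
            self.out[s].add(p)
        # BFS to set failure links and merge output sets.
        q = deque(self.goto[0].values())
        while q:
            s = q.popleft()
            for ch, t in self.goto[s].items():
                q.append(t)
                f = self.fail[s]
                while f and ch not in self.goto[f]:
                    f = self.fail[f]
                self.fail[t] = self.goto[f].get(ch, 0)
                self.out[t] |= self.out[self.fail[t]]

    def search(self, text):
        """Return (start_index, pattern) for every match, in one pass."""
        s, hits = 0, []
        for i, ch in enumerate(text):
            while s and ch not in self.goto[s]:
                s = self.fail[s]
            s = self.goto[s].get(ch, 0)
            for p in self.out[s]:
                hits.append((i - len(p) + 1, p))
        return hits
```

The hardware version parallelizes exactly this single-pass, multi-pattern property across packet streams.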
(This article belongs to the Special Issue Energy-Efficient Processors, Systems, and Their Applications)
19 pages, 7043 KiB  
Article
Numerical Evaluation of Complex Capacitance Measurement Using Pulse Excitation in Electrical Capacitance Tomography
by Damian Wanta, Oliwia Makowiecka, Waldemar T. Smolik, Jacek Kryszyn, Grzegorz Domański, Mateusz Midura and Przemysław Wróblewski
Electronics 2022, 11(12), 1864; https://doi.org/10.3390/electronics11121864 - 13 Jun 2022
Cited by 7 | Viewed by 3361
Abstract
Electrical capacitance tomography (ECT) is a technique of imaging the distribution of permittivity inside an object under test. Capacitance is measured between the electrodes surrounding the object, and the image is reconstructed from these data by solving the inverse problem. Although both sinusoidal [...] Read more.
Electrical capacitance tomography (ECT) is a technique for imaging the permittivity distribution inside an object under test. Capacitance is measured between the electrodes surrounding the object, and the image is reconstructed from these data by solving the inverse problem. Although both sinusoidal excitation and pulse excitation are used in the sensing circuit, only the AC method is used to measure both components of complex capacitance. In this article, a novel method of complex capacitance measurement using pulse excitation is proposed for ECT. The real and imaginary components are calculated from digital samples of the integrator response. The pulse shape in the front-end circuit was analyzed using the Laplace transform. Numerical simulations of the electric field inside the imaging volume, as well as simulations of a pulse excitation in the front-end circuit, were performed. The calculation of the real and imaginary components from digital samples of the output signal was verified, and the permittivity and conductivity images reconstructed for the test object are presented. The method enables imaging of permittivity and conductivity spatial distributions using capacitively coupled electrodes and may serve as an alternative measurement method for ECT as well as for electrical impedance tomography. Full article
(This article belongs to the Special Issue Advances in Electrical Capacitance Tomography System)
13 pages, 4565 KiB  
Article
Neuron Circuit Failure and Pattern Learning in Electronic Spiking Neural Networks
by Sumedha Gandharava, Robert C. Ivans, Benjamin R. Etcheverry and Kurtis D. Cantley
Electronics 2022, 11(9), 1392; https://doi.org/10.3390/electronics11091392 - 27 Apr 2022
Viewed by 2995
Abstract
Biological neural networks demonstrate remarkable resilience and the ability to compensate for neuron losses over time. Thus, the effects of neural/synaptic losses in the brain go mostly unnoticed until the loss becomes profound. This study analyses the capacity of electronic spiking networks to [...] Read more.
Biological neural networks demonstrate remarkable resilience and the ability to compensate for neuron losses over time. Thus, the effects of neural/synaptic losses in the brain go mostly unnoticed until the loss becomes profound. This study analyses the capacity of electronic spiking networks to compensate for the sudden, random neuron failure (“death”) due to reliability degradation or other external factors such as exposure to ionizing radiation. Electronic spiking neural networks with memristive synapses are designed to learn spatio-temporal patterns representing 25 or 100-pixel characters. The change in the pattern learning ability of the neural networks is observed as the afferents (input layer neurons) in the network fail/die during network training. Spike-timing-dependent plasticity (STDP) learning behavior is implemented using shaped action potentials with a realistic, non-linear memristor model. This work focuses on three cases: (1) when only neurons participating in the pattern are affected, (2) when non-participating neurons (those that never present spatio-temporal patterns) are disabled, and (3) when random/non-selective neuron death occurs in the network (the most realistic scenario). Case 3 is further analyzed to compare what happens when neuron death occurs over time versus when multiple afferents fail simultaneously. Simulation results emphasize the importance of non-participating neurons during the learning process, concluding that non-participating afferents contribute to improving the learning ability and stability of the neural network. Instantaneous neuron death proves to be more detrimental for the network compared to when afferents fail over time. To a surprising degree, the electronic spiking neural networks can sometimes retain their pattern recognition capability even in the case of significant neuron death. Full article
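The STDP rule mentioned above is commonly modeled with exponential time windows. A minimal sketch of such a pairwise update, with illustrative (assumed) amplitudes and time constant rather than values from the paper's memristor model, is:

```python
import math

def stdp_dw(delta_t, a_plus=0.1, a_minus=0.12, tau=20.0):
    """Weight change for one pre/post spike pair, delta_t = t_post - t_pre (ms).

    Pre-before-post (delta_t > 0) potentiates the synapse; post-before-pre
    depresses it; the magnitude decays exponentially with |delta_t|.
    """
    if delta_t > 0:
        return a_plus * math.exp(-delta_t / tau)
    return -a_minus * math.exp(delta_t / tau)
```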
(This article belongs to the Section Artificial Intelligence Circuits and Systems (AICAS))
37 pages, 2473 KiB  
Article
Universal Reconfigurable Hardware Accelerator for Sparse Machine Learning Predictive Models
by Vuk Vranjkovic, Predrag Teodorovic and Rastislav Struharik
Electronics 2022, 11(8), 1178; https://doi.org/10.3390/electronics11081178 - 8 Apr 2022
Cited by 2 | Viewed by 3341
Abstract
This study presents a universal reconfigurable hardware accelerator for efficient processing of sparse decision trees, artificial neural networks and support vector machines. The main idea is to develop a hardware accelerator that will be able to directly process sparse machine learning models, resulting [...] Read more.
This study presents a universal reconfigurable hardware accelerator for efficient processing of sparse decision trees, artificial neural networks, and support vector machines. The main idea is to develop a hardware accelerator that can directly process sparse machine learning models, resulting in shorter inference times and lower power consumption compared to existing solutions. To the authors' best knowledge, this is the first hardware accelerator of this type, and the first capable of processing sparse machine learning models of different types. Besides the hardware accelerator itself, algorithms for the induction of sparse decision trees and the pruning of support vector machines and artificial neural networks are presented. Such sparse machine learning classifiers are attractive since they require significantly less memory for storing model parameters. This reduces data movement between the accelerator and DRAM memory, as well as the number of operations required to process input instances, leading to faster and more energy-efficient processing. This could be of significant interest in edge-based applications with severely constrained memory, computation resources, and power budgets. The performance of the algorithms and the developed hardware accelerator is demonstrated using standard benchmark datasets from the UCI Machine Learning Repository. The experimental results reveal that the proposed algorithms and hardware accelerator are superior to some existing solutions. Throughput is increased up to 2 times for decision trees, 2.3 times for support vector machines, and 38 times for artificial neural networks. When processing latency is considered, the maximum performance improvement is even higher: up to a 4.4 times reduction for decision trees, an 84.1 times reduction for support vector machines, and a 22.2 times reduction for artificial neural networks. Finally, since the accelerator supports sparse classifiers, its use leads to a significant reduction in energy spent on DRAM data transfers: 50.16% for decision trees, 93.65% for support vector machines, and as much as 93.75% for artificial neural networks. Full article
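The memory savings from sparse models come from storing only non-zero parameters. As a generic illustration (not the accelerator's actual on-chip format), the compressed sparse row (CSR) layout and the corresponding matrix-vector product look like:

```python
def to_csr(dense):
    """Convert a dense row-major matrix to CSR (values, col_idx, row_ptr)."""
    values, col_idx, row_ptr = [], [], [0]
    for row in dense:
        for j, v in enumerate(row):
            if v != 0:
                values.append(v)
                col_idx.append(j)
        row_ptr.append(len(values))  # row r occupies values[row_ptr[r]:row_ptr[r+1]]
    return values, col_idx, row_ptr

def csr_matvec(values, col_idx, row_ptr, x):
    """y = A @ x touching only the stored non-zeros."""
    return [sum(values[k] * x[col_idx[k]]
                for k in range(row_ptr[r], row_ptr[r + 1]))
            for r in range(len(row_ptr) - 1)]
```

Only the non-zeros move between memory and the compute units, which is the source of the DRAM-energy savings the abstract reports.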
(This article belongs to the Special Issue Energy-Efficient Processors, Systems, and Their Applications)
17 pages, 6358 KiB  
Article
Smart Cities and Awareness of Sustainable Communities Related to Demand Response Programs: Data Processing with First-Order and Hierarchical Confirmatory Factor Analyses
by Simona-Vasilica Oprea, Adela Bâra, Cristian-Eugen Ciurea and Laura Florentina Stoica
Electronics 2022, 11(7), 1157; https://doi.org/10.3390/electronics11071157 - 6 Apr 2022
Cited by 5 | Viewed by 3192
Abstract
The mentality of electricity consumers is one of the most important entities that must be addressed when dealing with issues in the operation of power systems. Consumers are used to being completely passive, but recently these things have changed as significant progress of [...] Read more.
The mentality of electricity consumers is one of the most important factors that must be addressed when dealing with issues in the operation of power systems. Consumers are used to being completely passive, but this has recently changed as Information and Communication Technologies (ICT) and the Internet of Things (IoT) have made significant progress. In this paper, we propose a statistical measurement model using a covariance structure, specifically a first-order confirmatory factor analysis (CFA) using the SAS CALIS procedure, to identify the factors that could contribute to a change of attitude within energy communities. Furthermore, this research identifies latent constructs and indicates which observed variables load on, or measure, them. For the simulation, two complex questionnaire data sets created by the Irish Commission for Energy Regulation (CER) were analyzed, demonstrating the influence of some exogenous variables on the questionnaire items. The results revealed a relevant relationship between the socio-economic and behavioral factors and the observed variables. Furthermore, the models provided a good fit to the data, as measured by the performance indicators. Full article
19 pages, 28337 KiB  
Article
Homomorphic Encryption Based Privacy Preservation Scheme for DBSCAN Clustering
by Mingyang Wang, Wenbin Zhao, Kangda Cheng, Zhilu Wu and Jinlong Liu
Electronics 2022, 11(7), 1046; https://doi.org/10.3390/electronics11071046 - 26 Mar 2022
Cited by 6 | Viewed by 3242
Abstract
In this paper, we propose a homomorphic encryption-based privacy protection scheme for DBSCAN clustering to reduce the risk of privacy leakage during data outsourcing computation. For the purpose of encrypting data in practical applications, we propose a variety of data preprocessing methods for [...] Read more.
In this paper, we propose a homomorphic encryption-based privacy protection scheme for DBSCAN clustering to reduce the risk of privacy leakage during outsourced computation. To encrypt data in practical applications, we propose a variety of data preprocessing methods for different data accuracies, together with preprocessing strategies tailored to different data precisions and computational overheads. In addition, we design a protocol to implement ciphertext comparison between users and cloud servers. Analysis of the experimental results indicates that our proposed scheme achieves high clustering accuracy and can guarantee the privacy and security of the data. Full article
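Additively homomorphic schemes such as Paillier let a server combine encrypted values (e.g., accumulate distance terms for DBSCAN neighborhood checks) without seeing the plaintexts. The toy sketch below uses deliberately tiny fixed primes to demonstrate the additive property; it is an illustration of the general idea, not the paper's scheme or parameters, and it is nowhere near a secure key size.

```python
import math
import random

def paillier_keygen(p=2357, q=2551):
    """Toy Paillier keypair from small fixed primes (illustration only)."""
    n = p * q
    lam = (p - 1) * (q - 1) // math.gcd(p - 1, q - 1)   # lcm(p-1, q-1)
    g = n + 1
    L = lambda x: (x - 1) // n
    mu = pow(L(pow(g, lam, n * n)), -1, n)              # modular inverse, Python >= 3.8
    return (n, g), (lam, mu)

def encrypt(pub, m):
    n, g = pub
    r = random.randrange(2, n)
    while math.gcd(r, n) != 1:                          # r must be invertible mod n
        r = random.randrange(2, n)
    return pow(g, m, n * n) * pow(r, n, n * n) % (n * n)

def decrypt(pub, priv, c):
    n, _ = pub
    lam, mu = priv
    return (pow(c, lam, n * n) - 1) // n * mu % n

# Multiplying ciphertexts adds the plaintexts: E(a) * E(b) mod n^2 = E(a + b)
```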
(This article belongs to the Special Issue Analog AI Circuits and Systems)
20 pages, 5566 KiB  
Article
Machine Learning-Based Feature Selection and Classification for the Experimental Diagnosis of Trypanosoma cruzi
by Nidiyare Hevia-Montiel, Jorge Perez-Gonzalez, Antonio Neme and Paulina Haro
Electronics 2022, 11(5), 785; https://doi.org/10.3390/electronics11050785 - 3 Mar 2022
Cited by 7 | Viewed by 3512
Abstract
Chagas disease, caused by the Trypanosoma cruzi (T. cruzi) parasite, is the third most common parasitosis worldwide. Most infected subjects can remain asymptomatic when opportune, early detection and an objective diagnosis are not conducted. Frequently, the disease [...] Read more.
Chagas disease, caused by the Trypanosoma cruzi (T. cruzi) parasite, is the third most common parasitosis worldwide. Most infected subjects can remain asymptomatic when opportune, early detection and an objective diagnosis are not conducted. Frequently, the disease manifests itself after a long time, accompanied by severe heart disease or by sudden death. Thus, diagnosis is a complex and challenging process in which several factors must be considered. In this paper, a novel pipeline is presented that integrates temporal data from four modalities (electrocardiography signals, echocardiography images, Doppler spectra, and ELISA antibody titers) with multiple feature selection analyses: a univariate analysis and a machine learning-based selection. The method includes an automatic dichotomous classification of animal status (control vs. infected) based on Random Forest, Extremely Randomized Trees, Decision Trees, and Support Vector Machine. The most relevant multimodal attributes found were ELISA (IgGT, IgG1, IgG2a), electrocardiography (SR mean, QT and ST intervals), ascending aorta Doppler signals, and echocardiography (left ventricle diameter during diastole). Concerning automatic classification from the selected features, the best accuracy for control vs. acute infection was 93.3 ± 13.3% in cross-validation and 100% in the final test; for control vs. chronic infection, it was 100% in both. We conclude that the proposed machine learning-based approach can help obtain a robust and objective diagnosis in early T. cruzi infection stages. Full article
(This article belongs to the Special Issue Machine Learning in Electronic and Biomedical Engineering)
10 pages, 647 KiB  
Article
Research on an Urban Low-Altitude Target Detection Method Based on Image Classification
by Haiyan Jin, Yuxin Wu, Guodong Xu and Zhilu Wu
Electronics 2022, 11(4), 657; https://doi.org/10.3390/electronics11040657 - 19 Feb 2022
Cited by 5 | Viewed by 2698
Abstract
With the expansion of the civil UAV (Unmanned Aerial Vehicle) market, UAVs are also increasingly being used in illegal activities such as espionage and snooping on privacy. Therefore, how to effectively control the activities of UAVs in cities has become an urgent problem [...] Read more.
With the expansion of the civil UAV (Unmanned Aerial Vehicle) market, UAVs are increasingly being used in illegal activities such as espionage and snooping on privacy. Therefore, how to effectively control the activities of UAVs in cities has become an urgent problem. Considering the urban background and the radar performance of communication signals, a low-altitude target detection scheme based on 5G base stations is proposed in this paper. A 5G signal is used as the external radiation source, a transceiver-separation arrangement is adopted, and forward-scattered waves are used to detect UAVs. This paper mainly analyzes the principle of forward-scatter detection in an urban environment, where the forward-scattered wave of a target is stronger than the backward-reflected wave and contains both height-difference and midline-height information on the target. Based on this theory, the paper proposes a forward-scattered wave recognition algorithm, YOLOv3-FCWImageNet, which transforms forward-scattered wave recognition into a target detection problem and exploits the excellent performance of image recognition algorithms. Simulation results show that FCWImageNet can effectively distinguish two different low-altitude targets and realize the monitoring and classification of UAVs. Full article
(This article belongs to the Special Issue Analog AI Circuits and Systems)
20 pages, 6328 KiB  
Article
A Run-Time Reconfiguration Method for an FPGA-Based Electrical Capacitance Tomography System
by Damian Wanta, Waldemar T. Smolik, Jacek Kryszyn, Przemysław Wróblewski and Mateusz Midura
Electronics 2022, 11(4), 545; https://doi.org/10.3390/electronics11040545 - 11 Feb 2022
Cited by 7 | Viewed by 4630
Abstract
A desirable feature of an electrical capacitance tomography system is the adaptation possibility to any sensor configuration and measurement mode. A run-time reconfiguration of a system for electrical capacitance tomography is presented. An original mechanism is elaborated to reconfigure, on the fly, a [...] Read more.
A desirable feature of an electrical capacitance tomography system is the ability to adapt to any sensor configuration and measurement mode. A run-time reconfiguration of a system for electrical capacitance tomography is presented. An original mechanism is elaborated to reconfigure, on the fly, a modular EVT4 system with multiple FPGAs installed. The outlined system architecture is based on FPGA programmable logic devices (Xilinx Spartan) and PicoBlaze soft-core processors, which are used for communication, measurement control, and data preprocessing. A novel method of FPGA partial reconfiguration is described, in which a PicoBlaze soft-core processor acts as the reconfiguration controller. Behavioral reconfiguration of the system is obtained by providing run-time access to the program code of a soft-core control processor. Tests were performed using EVT4 hardware and different tomographic scanning algorithms, and a test object was measured using 2D and 3D sensors. The time and resources required for the examined reconfiguration procedure are evaluated. Full article
(This article belongs to the Special Issue Advances in Electrical Capacitance Tomography System)
19 pages, 375 KiB  
Review
Bringing Emotion Recognition Out of the Lab into Real Life: Recent Advances in Sensors and Machine Learning
by Stanisław Saganowski
Electronics 2022, 11(3), 496; https://doi.org/10.3390/electronics11030496 - 8 Feb 2022
Cited by 54 | Viewed by 11777
Abstract
Bringing emotion recognition (ER) out of the controlled laboratory setup into everyday life can enable applications targeted at a broader population, e.g., helping people with psychological disorders, assisting kids with autism, monitoring the elderly, and general improvement of well-being. This work reviews progress [...] Read more.
Bringing emotion recognition (ER) out of the controlled laboratory setup into everyday life can enable applications targeted at a broader population, e.g., helping people with psychological disorders, assisting kids with autism, monitoring the elderly, and general improvement of well-being. This work reviews progress in sensors and machine learning methods and techniques that have made it possible to move ER from the lab to the field in recent years. In particular, the commercially available sensors collecting physiological data, signal processing techniques, and deep learning architectures used to predict emotions are discussed. A survey on existing systems for recognizing emotions in real-life scenarios—their possibilities, limitations, and identified problems—is also provided. The review is concluded with a debate on what challenges need to be overcome in the domain in the near future. Full article
(This article belongs to the Special Issue Machine Learning in Electronic and Biomedical Engineering)
13 pages, 916 KiB  
Article
Bidimensional and Tridimensional Poincaré Maps in Cardiology: A Multiclass Machine Learning Study
by Leandro Donisi, Carlo Ricciardi, Giuseppe Cesarelli, Armando Coccia, Federica Amitrano, Sarah Adamo and Giovanni D’Addio
Electronics 2022, 11(3), 448; https://doi.org/10.3390/electronics11030448 - 2 Feb 2022
Cited by 18 | Viewed by 3288
Abstract
Heart rate is a nonstationary signal and its variation may contain indicators of current disease or warnings about impending cardiac diseases. Hence, heart rate variation analysis has become a noninvasive tool to further study the activities of the autonomic nervous system. In this [...] Read more.
Heart rate is a nonstationary signal, and its variation may contain indicators of current disease or warnings about impending cardiac diseases. Hence, heart rate variability analysis has become a noninvasive tool for studying the activities of the autonomic nervous system. In this scenario, Poincaré plot analysis has proven to be a valuable tool to support the diagnosis of cardiac diseases. The study's aim is a preliminary exploration of the feasibility of machine learning to classify subjects belonging to five cardiac states (healthy, hypertension, myocardial infarction, congestive heart failure, and heart transplant) using ten unconventional quantitative parameters extracted from bidimensional and tridimensional Poincaré maps. The KNIME Analytics Platform was used to implement several machine learning algorithms: Gradient Boosting, Adaptive Boosting, k-Nearest Neighbor, and Naïve Bayes. Accuracy, sensitivity, and specificity were computed to assess the performance of the predictive models using leave-one-out cross-validation. The Synthetic Minority Oversampling technique was first applied for data augmentation, considering the small size of the dataset and the number of features. Feature importance, ranked by Information Gain, was computed. Preliminarily, a univariate statistical analysis was performed through a one-way Kruskal-Wallis test plus post-hoc tests for all the features. The machine learning analysis achieved interesting results in terms of evaluation metrics, as demonstrated by Adaptive Boosting and k-Nearest Neighbor (accuracies greater than 90%); Gradient Boosting and k-Nearest Neighbor even reached 100% in sensitivity and specificity, respectively. The most important features according to information gain are in line with the results of the statistical analysis, confirming their predictive power. The study shows that the proposed combination of unconventional features extracted from Poincaré maps and well-known machine learning algorithms represents a valuable approach to automatically classifying patients with different cardiac diseases. Future investigations on enriched datasets will further confirm the potential application of this methodology in diagnostics. Full article
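For context, the conventional Poincaré plot descriptors (the study uses ten unconventional parameters instead) are the SD1/SD2 pair, computable directly from successive RR intervals. A minimal sketch:

```python
import statistics

def poincare_sd(rr):
    """SD1/SD2 descriptors of the 2-D Poincaré plot of RR intervals (ms).

    Each point is (RR[i], RR[i+1]); SD1 is the spread across the identity
    line (short-term variability), SD2 the spread along it (long-term).
    """
    d1 = [(rr[i] - rr[i + 1]) / 2 ** 0.5 for i in range(len(rr) - 1)]
    d2 = [(rr[i] + rr[i + 1]) / 2 ** 0.5 for i in range(len(rr) - 1)]
    return statistics.pstdev(d1), statistics.pstdev(d2)
```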
(This article belongs to the Special Issue Machine Learning in Electronic and Biomedical Engineering)
12 pages, 1681 KiB  
Article
Automatic RTL Generation Tool of FPGAs for DNNs
by Seojin Jang, Wei Liu, Sangun Park and Yongbeom Cho
Electronics 2022, 11(3), 402; https://doi.org/10.3390/electronics11030402 - 28 Jan 2022
Cited by 2 | Viewed by 5084
Abstract
With the increasing use of multi-purpose artificial intelligence of things (AIOT) devices, embedded field-programmable gate arrays (FPGA) represent excellent platforms for deep neural network (DNN) acceleration on edge devices. FPGAs possess the advantages of low latency and high energy efficiency, but the scarcity [...] Read more.
With the increasing use of multi-purpose artificial intelligence of things (AIoT) devices, embedded field-programmable gate arrays (FPGAs) represent excellent platforms for deep neural network (DNN) acceleration on edge devices. FPGAs offer low latency and high energy efficiency, but the scarcity of FPGA development resources hampers the deployment of DNN-based edge devices. Register-transfer level programming, hardware verification, and precise resource allocation are needed to build a high-performance FPGA accelerator for DNNs. These tasks are challenging and time-consuming even for experienced hardware developers. Therefore, we propose an automated, collaborative design process employing an automatic design space exploration tool; an automatic DNN engine enables the tool to reshape and parse a DNN model from software to hardware. We also introduce a long short-term memory (LSTM)-based model that predicts performance and automatically generates a DNN model suited to the developer's requirements. We demonstrate our design scheme on three FPGAs: a ZCU104, a ZCU102, and a Cyclone V SoC (system on chip). The results show that our hardware-based edge accelerator achieves superior throughput compared with the most advanced edge graphics processing units. Full article
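The design space exploration the abstract mentions can be illustrated with a toy loop-tiling search over an FPGA's DSP budget. The cost model, parameter names, and budgets below are our own simplifications for illustration, not the tool's actual performance estimator.

```python
# Illustrative design-space exploration (not the authors' tool): enumerate
# loop-tiling factors (Tm, Tn) for a convolution layer, estimate DSP usage
# and latency, and keep the fastest design that fits the DSP budget.
def explore(M=64, N=64, R=28, C=28, K=3, dsp_budget=900, freq_mhz=200):
    best = None
    for tm in range(1, M + 1):
        for tn in range(1, N + 1):
            dsps = tm * tn                     # one MAC unit per (tm, tn) pair
            if dsps > dsp_budget:
                continue
            # Cycle count of the tiled loop nest (ceil-divided outer loops).
            cycles = -(-M // tm) * -(-N // tn) * R * C * K * K
            latency_ms = cycles / (freq_mhz * 1e3)
            if best is None or latency_ms < best[2]:
                best = (tm, tn, latency_ms, dsps)
    return best

tm, tn, latency_ms, dsps = explore()
print(f"best tiling: Tm={tm}, Tn={tn}, {dsps} DSPs, {latency_ms:.3f} ms")
```

A real tool would add BRAM and bandwidth constraints and, as in the paper, could replace the analytic cycle model with a learned (e.g., LSTM-based) performance predictor.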
13 pages, 1283 KiB  
Communication
Mobility-Aware Hybrid Flow Rule Cache Scheme in Software-Defined Access Networks
by Youngjun Kim, Jinwoo Park and Yeunwoong Kyung
Electronics 2022, 11(1), 160; https://doi.org/10.3390/electronics11010160 - 5 Jan 2022
Cited by 9 | Viewed by 2744
Abstract
Owing to the dynamic mobility of users, proactive flow rule caching has become a promising solution in software-defined networking (SDN)-based access networks for reducing the number of flow rule installation procedures between the forwarding nodes and the SDN controller. However, since each forwarding node has a limited flow rule cache, an efficient flow rule caching strategy is required. To address this challenge, this paper proposes a mobility-aware hybrid flow rule cache scheme. Based on a comparison between the delay requirement of the incoming flow and the response delay of the controller, the proposed scheme installs the flow rule either proactively or reactively on the target candidate forwarding nodes. To find the optimal number of proactive flow rules under the flow rule cache limits, an integer linear programming (ILP) problem is formulated and solved using a heuristic method. Extensive simulation results demonstrate that the proposed scheme outperforms existing schemes in terms of flow table utilization ratio, flow rule installation delay, and flow rule hit ratio under various settings. Full article
(This article belongs to the Special Issue Applied AI-Based Platform Technology and Application)
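The proactive-versus-reactive decision described above can be sketched as a simple rule plus a greedy cache-limit pass. The flow attributes and the ranking by handover probability below are illustrative stand-ins for the paper's ILP formulation and heuristic, not its actual algorithm.

```python
# Hedged sketch of a hybrid flow rule cache decision (names are ours):
# cache a rule proactively when the controller's response delay would
# violate the flow's delay requirement; a greedy pass enforces the
# per-node flow rule cache limit.
def plan_rules(flows, controller_delay, cache_limit):
    # flows: list of (flow_id, delay_requirement_ms, handover_probability)
    proactive = [f for f in flows if f[1] < controller_delay]
    # Prefer flows most likely to actually arrive at this node.
    proactive.sort(key=lambda f: f[2], reverse=True)
    proactive = proactive[:cache_limit]
    reactive = [f for f in flows if f not in proactive]
    return [f[0] for f in proactive], [f[0] for f in reactive]

flows = [("f1", 5.0, 0.9), ("f2", 50.0, 0.4),
         ("f3", 8.0, 0.7), ("f4", 6.0, 0.2)]
pro, rea = plan_rules(flows, controller_delay=10.0, cache_limit=2)
print("proactive:", pro, "reactive:", rea)
```

With a 10 ms controller delay and a two-rule cache, only the delay-sensitive flows with the highest handover probability are installed in advance; the rest fall back to reactive installation.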
48 pages, 12306 KiB  
Review
A Survey of Recommendation Systems: Recommendation Models, Techniques, and Application Fields
by Hyeyoung Ko, Suyeon Lee, Yoonseo Park and Anna Choi
Electronics 2022, 11(1), 141; https://doi.org/10.3390/electronics11010141 - 3 Jan 2022
Cited by 394 | Viewed by 61679
Abstract
This paper reviews the research trends linking the advanced technical aspects of recommendation systems used in various service areas with the business aspects of those services. First, for a reliable analysis of recommendation models, data mining technology, and related research by application service, more than 135 articles from top-ranking journals and top-tier conferences, published between 2010 and 2021 and indexed in Google Scholar, were collected and reviewed. On this basis, studies on recommendation system models and the technologies used in recommendation systems were systematized, and research trends by year were analyzed. In addition, the application service fields in which recommendation systems are used were classified, and the recommendation models and techniques used in each field were analyzed. Furthermore, vast amounts of application service-related data used by recommendation systems from 2010 to 2021 were collected without regard to journal ranking and reviewed alongside various recommendation system studies and industry data from the applied service fields. This study found that the flow and quantitative growth of the various detailed studies of recommendation systems interact with the business growth of the corresponding applied service fields. While providing a comprehensive summary of recommendation systems, this study offers insight to researchers interested in recommendation systems through its analysis of the various technologies and of the trends in the service fields to which recommendation systems are applied. Full article
(This article belongs to the Special Issue Electronic Solutions for Artificial Intelligence Healthcare Volume II)
24 pages, 2052 KiB  
Article
Electro-Thermal Model-Based Design of Bidirectional On-Board Chargers in Hybrid and Full Electric Vehicles
by Pierpaolo Dini and Sergio Saponara
Electronics 2022, 11(1), 112; https://doi.org/10.3390/electronics11010112 - 30 Dec 2021
Cited by 34 | Viewed by 5640
Abstract
In this paper, a model-based approach for the design of a bidirectional on-board charger (OBC) for modern hybrid and fully electric vehicles is proposed. The main objective and contribution of our study is to incorporate, in the same simulation environment, the modelling of both the electrical and the thermal behaviour of the switching devices. Most (if not all) studies in the literature analyse thermal behaviour using FEM (Finite Element Method) software, which requires the definition of complicated models based on partial differential equations. Simulating such accurate models is computationally expensive, so they cannot be incorporated into the same virtual environment in which the circuit equations are solved. This entails long waiting times and also means that the electrical and thermal models do not interact with each other, limiting the completeness of the analysis in the design phase. As a case study, we take as reference the architecture of a modular bidirectional single-phase OBC, consisting of a Totem Pole AC/DC converter with Power Factor Correction (PFC) followed by a Dual Active Bridge (DAB) DC/DC converter. Specifically, we consider a 7 kW OBC whose modules consist of switching devices built with modern 900 V GaN (Gallium Nitride) and 1200 V SiC (Silicon Carbide) technologies, to achieve maximum performance and efficiency. We present a procedure for sizing and selecting the electronic devices based on circuit-model analyses of the Totem Pole PFC and the DAB converter, performing validation through simulations that are as realistic as possible. The developed models are tested under various operating conditions of practical interest in order to validate the robustness of the implemented control algorithms under varying operating conditions.
The validation of the models and control loops is further enhanced by an exhaustive robustness analysis of the parametric variations of the model with respect to the nominal case. All simulations respect the operating limits of the selected devices and components, whose characteristics are reported in the datasheets in terms of both electrical and thermal behaviour. Full article
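The core idea of co-simulating electrical and thermal behaviour in one loop, rather than in a separate FEM tool, can be illustrated with a first-order sketch: the switch's conduction loss drives a lumped thermal network inside the same time-stepping loop as the electrical variables. All device and thermal parameters below are assumed for illustration, not taken from the paper.

```python
# Illustrative electro-thermal co-simulation (assumed parameters): the
# conduction loss of a GaN switch feeds a first-order junction-to-ambient
# thermal network, integrated with forward Euler in the electrical loop.
import math

R_DS_ON = 0.05           # on-resistance of the switch [ohm] (assumed)
R_TH, C_TH = 0.5, 2.0    # thermal resistance [K/W] and capacitance [J/K] (assumed)
T_AMB = 25.0             # ambient temperature [deg C]
DT = 1e-3                # time step [s]

t_j = T_AMB
for n in range(20000):   # 20 s of simulated time (>> thermal time constant)
    # Rectified 50 Hz current through the switch, 20 A peak (assumed).
    i = 20.0 * abs(math.sin(2 * math.pi * 50 * n * DT))
    p_loss = R_DS_ON * i * i          # instantaneous conduction loss [W]
    # Euler step of dT_j/dt = (p_loss - (T_j - T_amb) / R_th) / C_th
    t_j += DT * (p_loss - (t_j - T_AMB) / R_TH) / C_TH

print(f"steady-state junction temperature ~ {t_j:.1f} deg C")
```

Because the thermal state is updated at every electrical time step, loss and temperature can feed back into each other (e.g., a temperature-dependent on-resistance), which is exactly what a decoupled FEM post-analysis cannot capture.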
23 pages, 5596 KiB  
Article
Stability Analysis of Power Hardware-in-the-Loop Simulations for Grid Applications
by Simon Resch, Juliane Friedrich, Timo Wagner, Gert Mehlmann and Matthias Luther
Electronics 2022, 11(1), 7; https://doi.org/10.3390/electronics11010007 - 21 Dec 2021
Cited by 19 | Viewed by 4837
Abstract
Power Hardware-in-the-Loop (PHiL) simulation is an emerging methodology for testing real hardware equipment within an emulated virtual environment. The closed-loop interfacing between the Hardware under Test (HuT) and the Real Time Simulation (RTS) enables a realistic simulation but can also result in an unstable system. In addition to the fundamentals of PHiL simulation and interfacing, this paper therefore provides a consistent and comprehensive study of PHiL stability. An analytical approach is compared with a simulative one and supplemented by practical validation of the stability limits in PHiL simulation. Special focus is given to the differences between a switching and a linear amplifier as the power interface (PI). Stability limits and the respective factors of influence (e.g., feedback current filtering) are elaborated using a minimal example circuit with the voltage-type Ideal Transformer Model (ITM) PHiL interface algorithm (IA). Finally, the findings are transferred to a real low-voltage-grid PHiL application with a residential load and a photovoltaic system. Full article
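For the purely resistive special case, the well-known stability condition of the voltage-type ITM interface reduces to an impedance-ratio check: the interface loop gain is the ratio of the simulated (software) source impedance to the hardware impedance, and the loop is stable when its magnitude is below one. The sketch below shows this classical criterion only, not the paper's full analysis with amplifier dynamics and filtering.

```python
# Voltage-type ITM stability check for the resistive case: the PHiL loop
# gain is Z_software / Z_hardware; |loop gain| < 1 implies stability.
def itm_stable(z_software, z_hardware):
    return abs(z_software / z_hardware) < 1.0

print(itm_stable(0.1, 1.0))  # stiff simulated grid, weak hardware load
print(itm_stable(5.0, 1.0))  # weak simulated grid: loop gain > 1
```

In practice, frequency-dependent impedances, the amplifier's transfer function, and feedback current filtering shift this limit, which is precisely what the paper's analytical and simulative study quantifies.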