AI-Enabled Internet of Things for Engineering Applications

A special issue of Applied Sciences (ISSN 2076-3417). This special issue belongs to the section "Computing and Artificial Intelligence".

Deadline for manuscript submissions: closed (20 August 2023) | Viewed by 13577

Special Issue Editors


Prof. Dr. Moongu Jeon
Guest Editor
Gwangju Institute of Science and Technology, Gwangju 61005, Republic of Korea
Interests: artificial intelligence; machine learning; computer vision; visual surveillance; autonomous driving

Prof. Dr. Jeonghwan Gwak
Guest Editor
Department of Software, Korea National University of Transportation, Chungju 27469, Republic of Korea
Interests: computer vision; machine/deep learning; applications in visual surveillance and healthcare

Special Issue Information

Dear Colleagues,

The synergistic relationship between Artificial Intelligence (AI) and the Internet of Things (IoT) enables disruptive innovations in wearable and implantable devices for numerous engineering applications. The three key components of this emerging era of AI and IoT applications are (i) intelligent sensors, information, and knowledge; (ii) intelligent systems-of-systems; and (iii) advanced end-to-end analytics. Several research challenges remain in implementing AI-enabled IoT systems and applications. On the system and application front, there is a need to design intelligent, scalable data solutions and analytics facilitated by collaborative sensing and collaborative machine learning algorithms. This Special Issue therefore solicits unpublished, original research articles that present in-depth fundamental contributions, from a methodological or applied perspective, featuring novel algorithms, architectures, techniques, or systems that offer new insights and findings in the field of AI-empowered IoT. Topics of interest include, but are not limited to, the following:

  • Computing for IoT data processing;
  • Social data mining and computing;
  • Data mining tools and platforms;
  • Applications for AI-empowered competent IoT services;
  • Collective machine learning for AI-empowered IoT systems;
  • Privacy and security of AI-empowered IoT solutions;
  • Edge AI for human–computer interaction and human-centric IoT systems;
  • Intelligent edge IoT devices for biomedical, surveillance, and other industrial applications;
  • Stream processing for efficient IoT data processing;
  • 5G-assisted IoT techniques and applications;
  • Blockchain and IoT systems and applications;
  • Evolutionary algorithms for IoT and wearable systems and applications;
  • Modeling and simulation of large-scale IoT scenarios and IoT standardization;
  • Hybrid approaches and emerging real-world applications of AI-empowered IoT in healthcare;
  • Techniques, tools, and infrastructure that support the development and deployment of AI-empowered IoT systems;
  • Novel architectures, infrastructures, and protocols for AI-empowered IoT systems;
  • Remote healthcare and patient activity monitoring based on AI-empowered IoT technologies;
  • Energy-efficient AI and IoT applications and data analytics techniques;
  • Techniques for AI-empowered IoT applications for visual surveillance.

Prof. Dr. Moongu Jeon
Prof. Dr. Jeonghwan Gwak
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website and then proceeding to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Applied Sciences is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • edge AI
  • internet of things
  • machine learning
  • deep learning
  • collaborative learning
  • AI systems and applications

Published Papers (7 papers)


Research

20 pages, 10033 KiB  
Article
Keypoint-Based Automated Component Placement Inspection for Printed Circuit Boards
by Si-Tung Chung, Wen-Jyi Hwang and Tsung-Ming Tai
Appl. Sci. 2023, 13(17), 9863; https://doi.org/10.3390/app13179863 - 31 Aug 2023
Cited by 1 | Viewed by 675
Abstract
This study aims to develop novel automated computer vision algorithms and systems for component placement inspection of printed circuit boards (PCBs). The proposed algorithms identify the locations and sizes of different components; they are object detection algorithms based on key points of the target components. The algorithms can be implemented as neural networks consisting of two portions: frontend networks and backend networks. The frontend networks perform feature extraction on the input images, and the backend networks produce the component inspection results. Each component class can have its own frontend and backend networks, so the neural model for a component class can be effectively reused across different PCBs. To reduce inference time, different component classes can share the same frontend networks. A two-stage training process is proposed to effectively explore the features of different components for accurate inspection. The proposed algorithm offers simple training data collection, high accuracy in defect detection, and high reusability and flexibility for online inspection, making it an effective alternative for automated inspection in smart factories, where demand for product quality and diversification is growing.
(This article belongs to the Special Issue AI-Enabled Internet of Things for Engineering Applications)
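For readers who want a concrete picture, the shared-frontend/per-class-backend design described in the abstract can be sketched as follows. This is a minimal illustrative sketch in PyTorch, not the authors' implementation; the layer sizes and the component classes ("resistor", "capacitor") are hypothetical.

```python
import torch
import torch.nn as nn

class SharedFrontend(nn.Module):
    """Feature extractor shared across component classes (toy CNN)."""
    def __init__(self, out_channels=64):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, out_channels, 3, stride=2, padding=1), nn.ReLU(),
        )

    def forward(self, x):
        return self.features(x)

class KeypointBackend(nn.Module):
    """Per-class head predicting a keypoint heatmap for one component class."""
    def __init__(self, in_channels=64):
        super().__init__()
        self.head = nn.Conv2d(in_channels, 1, kernel_size=1)

    def forward(self, feats):
        return torch.sigmoid(self.head(feats))  # peaks mark component key points

frontend = SharedFrontend()
backends = {c: KeypointBackend() for c in ["resistor", "capacitor"]}  # hypothetical classes

image = torch.randn(1, 3, 256, 256)   # dummy PCB image
feats = frontend(image)               # computed once, shared by all classes
heatmaps = {c: net(feats) for c, net in backends.items()}
```

Sharing the frontend pass while keeping the backends separate is what allows per-class models to be reused across PCBs without repeating the feature extraction at inference time.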

15 pages, 2780 KiB  
Article
Contactless Real-Time Eye Gaze-Mapping System Based on Simple Siamese Networks
by Hoyeon Ahn, Jiwon Jeon, Donghwuy Ko, Jeonghwan Gwak and Moongu Jeon
Appl. Sci. 2023, 13(9), 5374; https://doi.org/10.3390/app13095374 - 25 Apr 2023
Cited by 1 | Viewed by 1251
Abstract
Human–computer interaction (HCI) is a multidisciplinary field that investigates the interactions between humans and computer systems and has facilitated the development of various digital technologies that aim to deliver optimal user experiences. Gaze recognition is a critical aspect of HCI, as it can provide valuable insights into basic human behavior, and gaze matching is a reliable approach for identifying the area at which a user is looking. Early gaze-tracking methods required users to wear glasses with a tracking function, limited tracking to a small monitoring area, and restricted gaze estimation to a fixed posture within a narrow range. In this study, we propose a novel non-contact gaze-mapping system that overcomes the physical limitations of previous methods and can be applied in real-world environments. Our experimental results demonstrate an average gaze-mapping accuracy of 92.9% across nine different test environments. Moreover, we introduce the GIST gaze-mapping (GGM) dataset, which serves as a valuable resource for learning and evaluating gaze-mapping techniques.
(This article belongs to the Special Issue AI-Enabled Internet of Things for Engineering Applications)
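A minimal sketch of the matching idea behind such a system: embed eye-region crops with a Siamese-style encoder and map the gaze to the screen region whose calibration reference is most similar. The encoder layout, the 9-region gallery, and the crop sizes below are assumptions for illustration, not the paper's architecture or training procedure.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class EyeEncoder(nn.Module):
    """Toy encoder producing L2-normalized embeddings of eye-region crops."""
    def __init__(self, dim=128):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.proj = nn.Linear(32, dim)

    def forward(self, x):
        return F.normalize(self.proj(self.backbone(x)), dim=-1)

encoder = EyeEncoder()

# Hypothetical calibration gallery: one reference eye crop per screen region.
gallery = {region: encoder(torch.randn(1, 1, 64, 64)) for region in range(9)}

query = encoder(torch.randn(1, 1, 64, 64))  # live eye crop from the camera
scores = {r: F.cosine_similarity(query, ref).item() for r, ref in gallery.items()}
gaze_region = max(scores, key=scores.get)   # region whose reference matches best
```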

18 pages, 760 KiB  
Article
Lightweight Model for Botnet Attack Detection in Software Defined Network-Orchestrated IoT
by Worku Gachena Negera, Friedhelm Schwenker, Taye Girma Debelee, Henock Mulugeta Melaku and Degaga Wolde Feyisa
Appl. Sci. 2023, 13(8), 4699; https://doi.org/10.3390/app13084699 - 7 Apr 2023
Cited by 4 | Viewed by 1884
Abstract
The Internet of Things (IoT) is being used in a variety of industries, including agriculture, the military, smart cities, smart grids, and personalized health care, and it is also being used to control critical infrastructure. Nevertheless, because IoT devices lack security procedures and the processing power to execute computationally costly antimalware applications, they are susceptible to malware attacks. In addition, conventional malware-detection mechanisms identify threats through known malware fingerprints stored in their databases. With the ever-evolving and drastic increase in malware threats in the IoT, however, traditional antimalware software that solely defends against known threats is no longer enough. Consequently, in this paper, we propose a lightweight deep learning model for an SDN-enabled IoT framework that supports resource-constrained IoT devices by provisioning computing resources to deploy instant protection against botnet malware attacks. The proposed model achieves 99% precision, recall, and F1 score and 99.4% accuracy, with an execution time of 0.108 milliseconds, a size of 118 KB, and 19,414 parameters. It thus delivers high accuracy while utilizing few computational resources, addressing the resource-limitation issues of IoT deployments.
(This article belongs to the Special Issue AI-Enabled Internet of Things for Engineering Applications)
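To give a sense of scale, a model in the tens of thousands of parameters is small enough to deploy at an SDN controller or gateway. The sketch below shows a comparably sized binary classifier over per-flow features; the layer widths and the 25-feature input are assumptions, not the authors' architecture.

```python
import torch
import torch.nn as nn

# Illustrative lightweight detector; widths and the 25-feature flow input
# are assumptions, not the paper's exact design.
model = nn.Sequential(
    nn.Linear(25, 128), nn.ReLU(),
    nn.Linear(128, 64), nn.ReLU(),
    nn.Linear(64, 1), nn.Sigmoid(),   # probability that a flow is botnet traffic
)

n_params = sum(p.numel() for p in model.parameters())
print(f"{n_params:,} trainable parameters")  # ~11.6k, same order as the paper's 19,414

flow_features = torch.randn(32, 25)   # dummy batch of SDN flow statistics
scores = model(flow_features)         # per-flow botnet probabilities
```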

20 pages, 4962 KiB  
Article
Federated Learning-Based Analysis of Human Sentiments and Physical Activities in Natural Disasters
by Muhammad Sadiq Amin and Woong-Kee Loh
Appl. Sci. 2023, 13(5), 2925; https://doi.org/10.3390/app13052925 - 24 Feb 2023
Viewed by 1434
Abstract
In federated learning (FL), in addition to the training and inference capacities of the global and local models, an appropriately annotated dataset is equally crucial. Such datasets rely on annotation procedures that are error-prone and laborious and that require manual inspection of the entire training set. In this study, we evaluate the effect on FL of the unlabeled data supplied by every participating node under active learning (AL). We propose an AL-empowered FL paradigm that combines two application scenarios and assesses different AL techniques. We demonstrate the efficacy of AL by attaining performance equivalent to both centralized and federated learning with well-annotated data, while using limited data images and reduced human assistance during annotation of the training sets. We establish that the proposed method is independent of the datasets and applications by assessing it on two distinct datasets and applications: human sentiments and human physical activities during natural disasters. We achieved viable results in both application domains that were relatively comparable to the optimal case, in which every data image was manually annotated and assessed (criterion 1). Moreover, a significant improvement of 5.5–6.7% was achieved using the active learning approaches on the training sets of the two datasets, which contained irrelevant images.
(This article belongs to the Special Issue AI-Enabled Internet of Things for Engineering Applications)
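As a sketch of how AL can plug into an FL round, the helper below implements entropy-based uncertainty sampling on a single client: only the most uncertain unlabeled images are forwarded for human annotation before local training. Uncertainty sampling is just one of several AL strategies; this function and the toy four-class model are illustrative assumptions, not the authors' code.

```python
import torch
import torch.nn.functional as F

def select_for_annotation(model, unlabeled, budget):
    """Rank a client's unlabeled images (N, C, H, W) by predictive entropy
    and return the `budget` most uncertain ones for human annotation."""
    model.eval()
    with torch.no_grad():
        probs = F.softmax(model(unlabeled), dim=1)
        entropy = -(probs * probs.clamp_min(1e-12).log()).sum(dim=1)
    top = entropy.argsort(descending=True)[:budget]
    return unlabeled[top]

# Toy usage with a hypothetical 4-class sentiment model.
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 4))
pool = torch.randn(100, 3, 32, 32)          # client-side unlabeled pool
to_label = select_for_annotation(model, pool, budget=10)
```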

15 pages, 2255 KiB  
Article
Content-Adaptive and Attention-Based Network for Hand Gesture Recognition
by Zongjing Cao, Yan Li and Byeong-Seok Shin
Appl. Sci. 2022, 12(4), 2041; https://doi.org/10.3390/app12042041 - 16 Feb 2022
Cited by 8 | Viewed by 2177
Abstract
For hand gesture recognition, recurrent neural networks and 3D convolutional neural networks are the most commonly used methods for learning the spatial–temporal features of gestures. In a recurrent neural network, the hidden state at a given time step is determined by both the current input and the hidden state at the previous step, which limits parallel computation. In 3D convolution-based methods, the large number of weight parameters that must be optimized leads to high computational costs. We introduce a transformer-based network for hand gesture recognition: a completely self-attentional architecture without any convolutional or recurrent layers. The framework classifies hand gestures by attending to the sequence information of the whole gesture video. In addition, we introduce an adaptive sampling strategy based on the video content to reduce the number of gesture-free frames fed to the model, thus reducing computational consumption. The proposed network achieves 83.2% and 93.8% recognition accuracy on two publicly available benchmarks, the NVGesture and EgoGesture datasets, respectively. The results of extensive comparison experiments show that our approach outperforms existing state-of-the-art gesture recognition systems.
(This article belongs to the Special Issue AI-Enabled Internet of Things for Engineering Applications)
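The two ideas in the abstract, content-adaptive frame sampling and a purely self-attentional classifier, can be sketched together as below. The frame-difference scoring, embedding sizes, and two-layer encoder are simplifying assumptions; only the NVGesture class count (25) comes from the benchmark itself.

```python
import torch
import torch.nn as nn

def adaptive_sample(video, k=16):
    """Content-adaptive sampling: score each frame by its difference from the
    previous frame and keep the k most dynamic frames in temporal order,
    discarding largely gesture-free frames. video: (T, C, H, W)."""
    diffs = (video[1:] - video[:-1]).abs().flatten(1).mean(dim=1)
    scores = torch.cat([diffs[:1], diffs])        # first frame inherits a score
    keep = scores.topk(k).indices.sort().values   # restore temporal order
    return video[keep]

# Toy self-attention classifier over per-frame embeddings (sizes assumed).
embed = nn.Sequential(nn.Flatten(1), nn.Linear(3 * 64 * 64, 256))
layer = nn.TransformerEncoderLayer(d_model=256, nhead=8, batch_first=True)
encoder = nn.TransformerEncoder(layer, num_layers=2)
classifier = nn.Linear(256, 25)                   # e.g., the 25 NVGesture classes

clip = torch.randn(64, 3, 64, 64)                 # dummy 64-frame gesture clip
frames = adaptive_sample(clip, k=16)
tokens = embed(frames).unsqueeze(0)               # (1, 16, 256) token sequence
logits = classifier(encoder(tokens).mean(dim=1))  # temporal average pooling
```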

10 pages, 1830 KiB  
Article
Direct Rating Estimation of Enlarged Perivascular Spaces (EPVS) in Brain MRI Using Deep Neural Network
by Ehwa Yang, Venkateswarlu Gonuguntla, Won-Jin Moon, Yeonsil Moon, Hee-Jin Kim, Mina Park and Jae-Hun Kim
Appl. Sci. 2021, 11(20), 9398; https://doi.org/10.3390/app11209398 - 11 Oct 2021
Cited by 3 | Viewed by 2098
Abstract
In this article, we propose a deep-learning-based estimation model for rating enlarged perivascular spaces (EPVS) in the brain's basal ganglia region using T2-weighted magnetic resonance imaging (MRI). The proposed method estimates the EPVS rating directly from the T2-weighted MRI without using either detection or segmentation of EPVS. The model uses the cropped basal ganglia region of the T2-weighted MRI, and we formulate the rating of EPVS as a multi-class classification problem. Model performance was evaluated using T2-weighted MRI data from 96 subjects collected at two hospitals. The results show that the proposed method can automatically rate EPVS, demonstrating great potential as a risk indicator of dementia to aid early diagnosis.
(This article belongs to the Special Issue AI-Enabled Internet of Things for Engineering Applications)
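Casting the rating as direct multi-class classification, with no detection or segmentation stage, reduces to a standard classifier over the cropped region, as in the minimal sketch below. The five-grade scale, crop size, and layer widths are assumptions for illustration only.

```python
import torch
import torch.nn as nn

# Toy classifier: EPVS rating as multi-class classification over a cropped
# basal ganglia region of a T2-weighted slice (grades and sizes assumed).
model = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
    nn.Flatten(), nn.Linear(32, 5),   # logits over 5 assumed EPVS grades
)

crop = torch.randn(1, 1, 96, 96)      # dummy basal ganglia crop
grade = model(crop).argmax(dim=1)     # predicted EPVS rating
```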

12 pages, 570 KiB  
Article
Anomaly Detection of the Brake Operating Unit on Metro Vehicles Using a One-Class LSTM Autoencoder
by Jaeyong Kang, Chul-Su Kim, Jeong Won Kang and Jeonghwan Gwak
Appl. Sci. 2021, 11(19), 9290; https://doi.org/10.3390/app11199290 - 6 Oct 2021
Cited by 19 | Viewed by 2890
Abstract
Detecting anomalies in the Brake Operating Unit (BOU) braking system of metro trains is very important for train reliability and safety. However, current periodic maintenance and inspection cannot detect anomalies at an early stage, and constructing a stable and accurate anomaly detection system is a very challenging task. Hence, in this work, we propose a method for detecting anomalies of the BOU on metro vehicles using a one-class long short-term memory (LSTM) autoencoder. First, we extract brake cylinder (BC) pressure data from the BOU data, since one of the anomaly cases of metro trains is that BC pressure relief time is delayed by 4 s. The extracted BC pressure data are then split into subsequences, which are fed into our proposed one-class LSTM autoencoder consisting of two LSTM blocks (encoder and decoder). The one-class LSTM autoencoder is trained using training data that contain only normal subsequences. To detect anomalies in test data that contain abnormal subsequences, the mean absolute error (MAE) for each subsequence is calculated; when the error is larger than a predefined threshold, set to the maximum MAE over the training (normal) dataset, that subsequence is declared an anomaly. We conducted experiments with the BOU data of metro trains in Korea, and the results show that our proposed method detects anomalies in the BOU data well.
(This article belongs to the Special Issue AI-Enabled Internet of Things for Engineering Applications)
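The detection rule is fully specified in the abstract: reconstruct each subsequence, compute its MAE, and flag it when the error exceeds the maximum MAE seen on normal training data. A minimal sketch follows, with the hidden size, window length, and single-feature input as assumptions and the training loop omitted for brevity.

```python
import torch
import torch.nn as nn

class LSTMAutoencoder(nn.Module):
    """One-class LSTM autoencoder: the encoder LSTM compresses a subsequence
    into a latent vector, which the decoder LSTM unrolls to reconstruct it."""
    def __init__(self, n_features=1, hidden=32):
        super().__init__()
        self.encoder = nn.LSTM(n_features, hidden, batch_first=True)
        self.decoder = nn.LSTM(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, n_features)

    def forward(self, x):                                # x: (batch, T, features)
        _, (h, _) = self.encoder(x)
        z = h[-1].unsqueeze(1).repeat(1, x.size(1), 1)   # latent repeated per step
        dec, _ = self.decoder(z)
        return self.out(dec)

model = LSTMAutoencoder()                        # assume it is trained on normal data
normal = torch.randn(64, 100, 1)                 # dummy normal BC-pressure windows

recon = model(normal)
train_mae = (recon - normal).abs().mean(dim=(1, 2))
threshold = train_mae.max().item()               # paper's rule: max training MAE

test = torch.randn(8, 100, 1)
test_mae = (model(test) - test).abs().mean(dim=(1, 2))
is_anomaly = test_mae > threshold                # True where a window is anomalous
```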
