Special Issue "Artificial Neural Networks for IoT-Enabled Smart Applications"

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Internet of Things".

Deadline for manuscript submissions: 15 April 2023 | Viewed by 11404

Special Issue Editors

Dr. Andrei Velichko
Guest Editor
Institute of Physics and Technology, Petrozavodsk State University, 31 Lenina Str., 185910 Petrozavodsk, Russia
Interests: neural networks; constrained devices; IoT; reservoir computing; ambient intelligence; synchronization of coupled oscillators; switching effect
Prof. Dr. Dmitry Korzun
Guest Editor
Department of Computer Science, Institute of Mathematics and Information Technology, Petrozavodsk State University, 31 Lenina Str., 185910 Petrozavodsk, Russia
Interests: ambient intelligence; smart spaces; Internet of Things; networking; mathematical modeling; performance evaluation; data mining; information services; industrial internet; socio-cyber-physical systems; software engineering
Prof. Dr. Alexander Meigal
Guest Editor
Department of Human and Animal Physiology, Laboratory of Novel Methods in Physiology, Institute of Higher Biomedical Technologies, Medical Institute, Petrozavodsk State University, 31 Lenina Str., 185910 Petrozavodsk, Russia
Interests: nonlinear dynamics of biosignals; motion analysis; extremal environments; microgravity; ageing; parkinsonism; ambient intelligence; smart spaces; Internet of Things; socio-cyberphysical systems

Special Issue Information

Dear Colleagues,

In the age of neural networks and the Internet of Things (IoT), the search for neural network architectures capable of operating on devices with limited computing power and small memory has become an urgent priority. Artificial intelligence (AI) applications in the IoT field include smart healthcare services, smart agriculture, smart environment monitoring, smart exploration, and smart disaster rescue. Typically, such applications must operate in real time. For example, security-camera object-recognition tasks work with detection intervals of around 500 ms to capture and respond to target events, and processing human health and physiological parameters from different sensors (heart-rate monitoring, glucose monitoring, oxygen saturation, etc.) generally requires an immediate response. Commercial smart IoT devices often transfer information to the cloud for subsequent intelligent processing. However, stable network connections are not available everywhere, which limits the ability to meet real-time requirements. One solution is to run the neural networks directly on the IoT devices, so that the quality of the Internet connection no longer has a significant impact. Enabling artificial intelligence directly on the device is difficult, however, because of the limited computing power and small memory of IoT devices. Frequently, smart applications must run on a lightweight OS with a minimal set of libraries, which further constrains resource-intensive neural networks.
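The memory constraint above can be made concrete: storing a network's weights as 8-bit integers instead of 32-bit floats cuts the model footprint by a factor of four, which is often what makes on-device inference feasible. The sketch below is illustrative only; the layer sizes, weights, and scales are hypothetical, not taken from any system described here. It quantizes a tiny feed-forward network and checks that its outputs stay close to the full-precision ones.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical float32 weights of a tiny 8-16-4 feed-forward network.
W1 = rng.normal(0, 0.5, (8, 16)).astype(np.float32)
W2 = rng.normal(0, 0.5, (16, 4)).astype(np.float32)

def quantize(w):
    """Per-tensor symmetric int8 quantization: w ~= q * scale."""
    scale = np.abs(w).max() / 127.0
    q = np.round(w / scale).astype(np.int8)
    return q, scale

Q1, s1 = quantize(W1)
Q2, s2 = quantize(W2)

def forward(x, w1, w2):
    """Plain two-layer MLP with ReLU."""
    return np.maximum(x @ w1, 0.0) @ w2

def forward_int8(x, q1, s1, q2, s2):
    """Same network, weights stored as int8 and dequantized on the fly."""
    return forward(x, q1.astype(np.float32) * s1, q2.astype(np.float32) * s2)

x = rng.normal(size=(1, 8)).astype(np.float32)
y_full = forward(x, W1, W2)
y_q = forward_int8(x, Q1, s1, Q2, s2)

# int8 storage is 4x smaller, and the outputs stay close.
print(Q1.nbytes + Q2.nbytes, W1.nbytes + W2.nbytes)
print(np.max(np.abs(y_full - y_q)))
```

On a real constrained device, the integer weights would also be multiplied in integer arithmetic; here the dequantize-and-matmul form keeps the sketch short.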

AI technologies for IoT devices and edge computing are in demand in mobile healthcare (m-Health), as well as in closely related application domains. Ambient intelligence (AmI) environments are built on top of IoT environments to provide smart services for people based on real-time analysis of human cognitive and motor functions. Examples include, but are not limited to:

  • At-home labs that let people make medical observations and analyses during everyday life, rather than under the specific and limited conditions of a professional medical laboratory in a hospital;
  • Applications for the Industrial Internet, where real-time monitoring of human movement and health parameters supports the detection of dangerous situations and incorrect activity during technological operations;
  • The Tactile Internet, with its demand for bionic applications in which a person can “touch and perceive” distant physical objects, or even virtual (digital) objects, through the Internet.

This Special Issue is dedicated to recent developments in the constantly growing field of computing technologies and artificial intelligence algorithms. It includes new approaches to deploying artificial intelligence on peripheral devices and to organizing modular, feed-forward, distributed, reservoir, recurrent, convolutional, and deep neural networks for various IoT-enabled smart applications.

Dr. Andrei Velichko
Prof. Dr. Dmitry Korzun
Prof. Dr. Alexander Meigal
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and written in good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • IoT environment
  • Internet of Medical Devices
  • healthcare-monitoring devices
  • IoT-based healthcare services
  • smart IoT-based agriculture
  • smart IoT-based environment monitoring
  • smart IoT-based exploration
  • smart IoT-based disaster rescue
  • edge computing
  • mobile computing
  • constrained devices
  • deep neural network
  • distributed neural networks
  • feed-forward neural network
  • convolutional neural network
  • recurrent neural network
  • reservoir neural network
  • modular neural network

Published Papers (11 papers)


Research


Article
Development of an Artificial Neural Network Algorithm Embedded in an On-Site Sensor for Water Level Forecasting
Sensors 2022, 22(21), 8532; https://doi.org/10.3390/s22218532 - 05 Nov 2022
Viewed by 356
Abstract
Extreme weather events cause stream overflow and lead to urban inundation. In this study, a decentralized flood monitoring system is proposed to provide water level predictions in streams three hours ahead. The customized sensor in the system measures the water levels and implements edge computing to produce future water levels. This is very different from traditional centralized monitoring systems and is considered an innovation in the field. In edge computing, traditional physics-based algorithms are not computationally efficient if microprocessors are used in sensors. A correlation analysis was performed to identify key factors that influence the variations in the water level forecasts. For example, the second-order difference in the water level is considered to represent the acceleration or deceleration of a water level rise. According to different input factors, three artificial neural network (ANN) models were developed. Four streams or canals were selected to test and evaluate the performance of the models. One case was used for model training and testing, and the others were used for model validation. The results demonstrated that the ANN model with the second-order water level difference as an input factor outperformed the other ANN models in terms of RMSE. The customized microprocessor-based sensor with an embedded ANN algorithm can be adopted to improve edge computing capabilities and support emergency response and decision making. Full article
(This article belongs to the Special Issue Artificial Neural Networks for IoT-Enabled Smart Applications)
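The second-order-difference feature the abstract describes can be illustrated on synthetic data. The paper's dataset and network configuration are not reproduced here; the series, lead time, and layer size below are assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Synthetic water-level series standing in for a real stream gauge.
t = np.arange(500)
level = 2.0 + np.sin(t / 30.0) + 0.05 * rng.normal(size=t.size)

lead = 3  # forecast horizon, in time steps
d1 = np.diff(level)       # first-order difference: rise rate
d2 = np.diff(level, n=2)  # second-order difference: (de)acceleration of the rise

# Align so each sample has (level, d1, d2) at time i and the target at i + lead.
X = np.column_stack([level[2:-lead], d1[1:-lead], d2[:-lead]])
y = level[2 + lead:]

model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0),
)
model.fit(X[:400], y[:400])
rmse = np.sqrt(np.mean((model.predict(X[400:]) - y[400:]) ** 2))
print(f"test RMSE: {rmse:.3f}")
```

On a microprocessor-class sensor the trained weights would be exported and the same three features computed from the gauge readings on-device.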

Article
Predicting Chemical Carcinogens Using a Hybrid Neural Network Deep Learning Method
Sensors 2022, 22(21), 8185; https://doi.org/10.3390/s22218185 - 26 Oct 2022
Cited by 1 | Viewed by 414
Abstract
Determining environmental chemical carcinogenicity is urgently needed as humans are increasingly exposed to these chemicals. In this study, we developed a hybrid neural network (HNN) method called HNN-Cancer to predict potential carcinogens of real-life chemicals. HNN-Cancer includes a new SMILES feature representation method, obtained by modifying our previous 3D array representation of 1D SMILES processed by a convolutional neural network (CNN). We developed binary classification, multiclass classification, and regression models based on diverse non-congeneric chemicals. Along with the HNN-Cancer model, we developed models based on the random forest (RF), bootstrap aggregating (Bagging), and adaptive boosting (AdaBoost) methods for binary and multiclass classification. We developed regression models using HNN-Cancer, RF, support vector regressor (SVR), gradient boosting (GB), kernel ridge (KR), decision tree with AdaBoost (DT), KNeighbors (KN), and a consensus method. The performance of the models for all classifications was assessed using various statistical metrics. The accuracy of the HNN-Cancer, RF, and Bagging models was 74%, and their AUC was ~0.81 for binary classification models developed with 7994 chemicals. The sensitivity was 79.5% and the specificity was 67.3% for HNN-Cancer, which outperforms the other methods. For the multiclass classification models built with 1618 chemicals, we obtained an optimal accuracy of 70% with an AUC of 0.7 for HNN-Cancer, RF, Bagging, and AdaBoost. For the regression models, the correlation coefficient (R) was around 0.62 for HNN-Cancer and RF, higher than for the SVR, GB, KR, DT, and KN machine learning methods. Overall, HNN-Cancer performed better on the majority of the known carcinogen experimental datasets. Further, the predictive performance of HNN-Cancer on diverse chemicals is comparable to that of literature-reported models built on similar and less diverse molecules. HNN-Cancer could be used to identify potentially carcinogenic chemicals across a wide variety of chemical classes. Full article
(This article belongs to the Special Issue Artificial Neural Networks for IoT-Enabled Smart Applications)

Article
Gait Characteristics Analyzed with Smartphone IMU Sensors in Subjects with Parkinsonism under the Conditions of “Dry” Immersion
Sensors 2022, 22(20), 7915; https://doi.org/10.3390/s22207915 - 18 Oct 2022
Viewed by 494
Abstract
Parkinson’s disease (PD) is increasingly being studied using science-intensive methods due to economic, medical, rehabilitation and social reasons. Wearable sensors and Internet of Things-enabled technologies look promising for monitoring motor activity and gait in PD patients. In this study, we sought to evaluate gait characteristics by analyzing the accelerometer signal received from a smartphone attached to the head during an extended TUG test, before and after single and repeated sessions of terrestrial microgravity modeled with the condition of “dry” immersion (DI) in five subjects with PD. The accelerometer signal from IMU during walking phases of the TUG test allowed for the recognition and characterization of up to 35 steps. In some patients with PD, unusually long steps have been identified, which could potentially have diagnostic value. It was found that after one DI session, stepping did not change, though in one subject it significantly improved (cadence, heel strike and step length). After a course of DI sessions, some characteristics of the TUG test improved significantly. In conclusion, the use of accelerometer signals received from a smartphone IMU looks promising for the creation of an IoT-enabled system to monitor gait in subjects with PD. Full article
(This article belongs to the Special Issue Artificial Neural Networks for IoT-Enabled Smart Applications)

Article
Machine Learning Sensors for Diagnosis of COVID-19 Disease Using Routine Blood Values for Internet of Things Application
Sensors 2022, 22(20), 7886; https://doi.org/10.3390/s22207886 - 17 Oct 2022
Cited by 1 | Viewed by 890
Abstract
Healthcare digitalization requires effective applications of human sensors, when various parameters of the human body are instantly monitored in everyday life due to the Internet of Things (IoT). In particular, machine learning (ML) sensors for the prompt diagnosis of COVID-19 are an important option for IoT application in healthcare and ambient assisted living (AAL). Determining a COVID-19 infected status with various diagnostic tests and imaging results is costly and time-consuming. This study provides a fast, reliable and cost-effective alternative tool for the diagnosis of COVID-19 based on the routine blood values (RBVs) measured at admission. The dataset of the study consists of a total of 5296 patients with the same number of negative and positive COVID-19 test results and 51 routine blood values. In this study, 13 popular machine learning classifier models and the LogNNet neural network model were examined. The most successful classifier model in terms of time and accuracy in the detection of the disease was the histogram-based gradient boosting (HGB) classifier (accuracy: 100%, time: 6.39 s). The HGB classifier identified the 11 most important features (LDL, cholesterol, HDL-C, MCHC, triglyceride, amylase, UA, LDH, CK-MB, ALP and MCH) to detect the disease with 100% accuracy. In addition, the importance of single, double and triple combinations of these features in the diagnosis of the disease was discussed. We propose to use these 11 features and their binary combinations as important biomarkers for ML sensors in the diagnosis of the disease, supporting edge computing on Arduino and cloud IoT service. Full article
(This article belongs to the Special Issue Artificial Neural Networks for IoT-Enabled Smart Applications)

Article
Deep Learning and 5G and Beyond for Child Drowning Prevention in Swimming Pools
Sensors 2022, 22(19), 7684; https://doi.org/10.3390/s22197684 - 10 Oct 2022
Viewed by 584
Abstract
Drowning is a major health issue worldwide. The World Health Organization’s global report on drowning states that the highest rates of drowning deaths occur among children aged 1–4 years, followed by children aged 5–9 years. Young children can drown silently in as little as 25 s, even in the shallow end or in a baby pool. The report also identifies that the main risk factor for children drowning is the lack of, or inadequate, supervision. Therefore, in this paper, we propose a novel 5G and beyond child drowning prevention system based on deep learning that detects and classifies distractions of inattentive parents or caregivers and alerts them to focus on active child supervision in swimming pools. For this proposal, we generated our own dataset, which consists of images of parents/caregivers watching the children or being distracted. The proposed models can successfully perform a seven-class classification with very high accuracies (98%, 94%, and 90%, respectively). ResNet-50, compared with the other models, performs better classifications for most classes. Full article
(This article belongs to the Special Issue Artificial Neural Networks for IoT-Enabled Smart Applications)

Article
Diagnosis and Prognosis of COVID-19 Disease Using Routine Blood Values and LogNNet Neural Network
Sensors 2022, 22(13), 4820; https://doi.org/10.3390/s22134820 - 25 Jun 2022
Cited by 3 | Viewed by 874
Abstract
Since February 2020, the world has been engaged in an intense struggle with the COVID-19 disease, and health systems have come under tragic pressure as the disease turned into a pandemic. The aim of this study is to obtain the most effective routine blood values (RBV) for the diagnosis and prognosis of COVID-19 using a backward feature elimination algorithm for the LogNNet reservoir neural network. The first dataset in the study consists of a total of 5296 patients with the same number of negative and positive COVID-19 tests. The LogNNet model achieved an accuracy of 99.5% in the diagnosis of the disease with 46 features and an accuracy of 99.17% using only mean corpuscular hemoglobin concentration, mean corpuscular hemoglobin, and activated partial prothrombin time. The second dataset consists of a total of 3899 patients with a diagnosis of COVID-19 who were treated in hospital, of which 203 were severe patients and 3696 were mild patients. The model reached an accuracy of 94.4% in determining the prognosis of the disease with 48 features and an accuracy of 82.7% using only the erythrocyte sedimentation rate, neutrophil count, and C-reactive protein features. Our method will reduce the negative pressure on the health sector and help doctors understand the pathogenesis of COVID-19 using the key features. The method is promising for creating mobile health monitoring systems in the Internet of Things. Full article
(This article belongs to the Special Issue Artificial Neural Networks for IoT-Enabled Smart Applications)
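Backward feature elimination, as used in the abstract, can be sketched generically: repeatedly drop the feature whose removal costs the least cross-validated accuracy. LogNNet itself is the authors' model, so a plain logistic regression stands in below, and the synthetic data (only the first three of ten features informative) is an assumption:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Synthetic data: only features 0, 1, 2 carry signal.
X = rng.normal(size=(400, 10))
y = (X[:, 0] + X[:, 1] - X[:, 2] + 0.3 * rng.normal(size=400) > 0).astype(int)

def backward_eliminate(X, y, min_features=3):
    """Greedy backward elimination: at each round, drop the feature whose
    removal hurts cross-validated accuracy the least."""
    kept = list(range(X.shape[1]))
    while len(kept) > min_features:
        scores = {}
        for f in kept:
            cols = [c for c in kept if c != f]
            scores[f] = cross_val_score(
                LogisticRegression(max_iter=1000), X[:, cols], y, cv=5).mean()
        kept.remove(max(scores, key=scores.get))  # cheapest feature to lose
    return kept

print(sorted(backward_eliminate(X, y)))
```

The same loop works with any estimator that exposes `fit`/`score`, which is how a reservoir network like LogNNet could be plugged in.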

Article
CNN Architectures and Feature Extraction Methods for EEG Imaginary Speech Recognition
Sensors 2022, 22(13), 4679; https://doi.org/10.3390/s22134679 - 21 Jun 2022
Cited by 2 | Viewed by 761
Abstract
Speech is a complex mechanism allowing us to communicate our needs, desires and thoughts. In some cases of neural dysfunction, this ability is highly affected, which makes everyday life activities that require communication a challenge. This paper studies different parameters of an intelligent imaginary speech recognition system to obtain the best performance of a method that can be applied on a low-cost system with limited resources. In developing the system, we used signals from the Kara One database containing recordings acquired for seven phonemes and four words. In the feature extraction stage, we used a method based on covariance in the frequency domain that performed better compared to other time-domain methods. Further, we observed the system performance when using different window lengths for the input signal (0.25 s, 0.5 s and 1 s) to highlight the importance of short-term analysis of the signals for imaginary speech. The final goal being the development of a low-cost system, we studied several architectures of convolutional neural networks (CNN) and showed that a more complex architecture does not necessarily lead to better results. Our study was conducted on eight different subjects, and it is meant to be a subject-shared system. The best performance reported in this paper is up to 37% accuracy for all 11 different phonemes and words when using cross-covariance computed over the signal spectrum of a 0.25 s window and a CNN containing two convolutional layers with 64 and 128 filters connected to a dense layer with 64 neurons. The final system qualifies as a low-cost system, using limited resources for decision-making and having a running time of 1.8 ms tested on an AMD Ryzen 7 4800HS CPU. Full article
(This article belongs to the Special Issue Artificial Neural Networks for IoT-Enabled Smart Applications)
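One plausible reading of “covariance in the frequency domain” — a sketch only, not the authors' exact pipeline; the channel count and sampling rate below are assumptions — is to take the covariance of the per-channel magnitude spectra of a window and use its upper triangle as the feature vector:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic multichannel "EEG" window: 14 channels, 0.25 s at 1 kHz.
fs, win_s, n_ch = 1000, 0.25, 14
x = rng.normal(size=(n_ch, int(fs * win_s)))

def freq_cov_features(x):
    """Covariance of the channel magnitude spectra, with the upper
    triangle (including the diagonal) flattened into a feature vector."""
    spec = np.abs(np.fft.rfft(x, axis=1))  # per-channel magnitude spectrum
    cov = np.cov(spec)                     # channel-by-channel covariance
    iu = np.triu_indices(cov.shape[0])
    return cov[iu]

feats = freq_cov_features(x)
print(feats.shape)  # (n_ch * (n_ch + 1) / 2,) = (105,)
```

A vector like this, computed per window, is compact enough to feed the small CNNs the abstract describes.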

Article
E2DR: A Deep Learning Ensemble-Based Driver Distraction Detection with Recommendations Model
Sensors 2022, 22(5), 1858; https://doi.org/10.3390/s22051858 - 26 Feb 2022
Cited by 3 | Viewed by 1657
Abstract
The increasing number of car accidents is a significant issue in current transportation systems. According to the World Health Organization (WHO), road accidents are the eighth leading cause of death around the world. More than 80% of road accidents are caused by distracted driving, such as using a mobile phone, talking to passengers, and smoking. Many efforts have been made to tackle the problem of driver distraction; however, no optimal solution has been provided. A practical approach to solving this problem is implementing quantitative measures for driver activities and designing a classification system that detects distracting actions. In this paper, we have implemented a portfolio of various ensemble deep learning models that have been proven to efficiently classify distracted driver actions and provide an in-car recommendation to minimize the level of distraction and increase in-car awareness for improved safety. This paper proposes E2DR, a new scalable model that uses stacking ensemble methods to combine two or more deep learning models to improve accuracy, enhance generalization, and reduce overfitting, with real-time recommendations. The highest-performing E2DR variant, which included the ResNet50 and VGG16 models, achieved a test accuracy of 92% as applied to state-of-the-art datasets, including the State Farm Distracted Drivers dataset, using novel data-splitting strategies. Full article
(This article belongs to the Special Issue Artificial Neural Networks for IoT-Enabled Smart Applications)
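Stacking, as used by E2DR, can be sketched with scikit-learn's `StackingClassifier`. Here two small classical models stand in for ResNet50 and VGG16, and the dataset is synthetic — both assumptions, not the paper's setup:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Synthetic stand-in dataset for the image-classification task.
X, y = make_classification(n_samples=600, n_features=20, n_informative=8,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

stack = StackingClassifier(
    estimators=[
        ("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
        ("mlp", MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000,
                              random_state=0)),
    ],
    final_estimator=LogisticRegression(),
    cv=5,  # meta-learner trains on out-of-fold base predictions
)
stack.fit(X_tr, y_tr)
print(f"stacked test accuracy: {stack.score(X_te, y_te):.2f}")
```

Training the meta-learner on out-of-fold predictions (the `cv=5` argument) is what gives stacking its resistance to overfitting relative to simply averaging base-model outputs.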

Article
Deep Learning Empowered Wearable-Based Behavior Recognition for Search and Rescue Dogs
Sensors 2022, 22(3), 993; https://doi.org/10.3390/s22030993 - 27 Jan 2022
Cited by 6 | Viewed by 2153
Abstract
Search and Rescue (SaR) dogs are important assets in the hands of first responders, as they can locate a victim even in cases where vision and/or sound are limited, thanks to their inherent talents in the olfactory and auditory senses. In this work, we propose a deep-learning-assisted implementation incorporating a wearable device, a base station, a mobile application, and a cloud-based infrastructure that can, first, monitor in real time the activity, the audio signals, and the location of a SaR dog and, second, recognize and alert the rescuing team whenever the SaR dog spots a victim. For this purpose, we employed deep Convolutional Neural Networks (CNN) both for activity recognition and sound classification, trained using data from inertial sensors, such as a 3-axial accelerometer and gyroscope, and from the wearable’s microphone, respectively. The developed deep learning models were deployed on the wearable device, while the overall proposed implementation was validated in two discrete search and rescue scenarios, managing to successfully spot the victim (i.e., an obtained F1-score of more than 99%) and inform the rescue team in real time in both scenarios. Full article
(This article belongs to the Special Issue Artificial Neural Networks for IoT-Enabled Smart Applications)

Review


Review
Deep Learning for LiDAR Point Cloud Classification in Remote Sensing
Sensors 2022, 22(20), 7868; https://doi.org/10.3390/s22207868 - 16 Oct 2022
Viewed by 944
Abstract
Point clouds are one of the most widely used data formats produced by depth sensors. There is a lot of research into feature extraction from unordered and irregular point cloud data. Deep learning in computer vision achieves great performance for the classification and segmentation of 3D data points as point clouds. Various research has been conducted on point clouds and remote sensing tasks using deep learning (DL) methods. However, there is a research gap in providing a road map of existing work, including its limitations and challenges. This paper focuses on introducing the state-of-the-art DL models, categorized by the structure of the data they consume. The models’ performance is collected, and results are provided for benchmarking on the most used datasets. Additionally, we summarize the current benchmark 3D datasets publicly available for DL training and testing. From our comparative study, we conclude that convolutional neural networks (CNNs), namely Dynamic Graph CNN (DGCNN) and ConvPoint, achieve the best performance in various remote-sensing applications while remaining lightweight models. Full article
(This article belongs to the Special Issue Artificial Neural Networks for IoT-Enabled Smart Applications)

Other


Systematic Review
E-Cardiac Care: A Comprehensive Systematic Literature Review
Sensors 2022, 22(20), 8073; https://doi.org/10.3390/s22208073 - 21 Oct 2022
Viewed by 558
Abstract
The Internet of Things (IoT) is a complete ecosystem encompassing various communication technologies, sensors, hardware, and software. Cutting-edge IoT technologies and Artificial Intelligence (AI) have enhanced the traditional healthcare system considerably. The conventional healthcare system faces many challenges, including avoidable long wait times, high costs, conventional methods of payment, unnecessarily long travel to medical centers, and mandatory periodic doctor visits. Smart healthcare systems, the Internet of Things (IoT), and AI are arguably the best-suited, tailor-made solutions for all the flaws of traditional healthcare systems. The primary goal of this study is to determine the impact of IoT, AI, various communication technologies, sensor networks, and disease detection/diagnosis in cardiac healthcare through a systematic analysis of scholarly articles. Hence, a total of 104 fundamental studies are analyzed for the research questions purposefully defined for this systematic study. The review results show that deep learning, in combination with IoT, emerges as a promising technology in the domain of e-cardiac care, with enhanced accuracy and real-time clinical monitoring. This study also pins down the key benefits and significant challenges for e-cardiology in the domains of IoT and AI. It further identifies the gaps and future research directions related to e-cardiology, the monitoring of various cardiac parameters, and diagnostic patterns. Full article
(This article belongs to the Special Issue Artificial Neural Networks for IoT-Enabled Smart Applications)
