AI and ML in the Future of Wearable Devices

A special issue of Electronics (ISSN 2079-9292). This special issue belongs to the section "Bioelectronics".

Deadline for manuscript submissions: closed (30 June 2022) | Viewed by 17637

Special Issue Editor


Prof. Dr. Maria D. R-Moreno
Guest Editor
Polytechnic School, University of Alcala, 28871 Alcala de Henares, Madrid, Spain
Interests: automated AI planning and scheduling; intelligent monitoring and execution; robotics and machine learning

Special Issue Information

Dear Colleagues,

The wearable industry is expected to grow enormously in the near future, influencing fields such as health and medicine, aging, disability, and gaming, among others. We will be surrounded by data, and giving that data meaning in order to improve efficiency will be essential. Artificial intelligence and machine learning can provide the solutions.

The aim of this Special Issue is to gather high-quality contributions that highlight methodologies, applications, and algorithms of machine learning for wearable devices. Surveys of the state of the art are also welcome. Topics of interest include, but are not limited to, the following:

  • Wearable technology and gaming;
  • Clothing technology;
  • Monitoring systems for assisted living;
  • Intelligent algorithms that are able to infer information from collected data;
  • Statistical classification methods;
  • Neural networks;
  • Deep learning;
  • Technology to integrate into textiles;
  • Smart connected things;
  • Wearable simulation.

Prof. Dr. Maria D. R-Moreno
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Electronics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Artificial intelligence
  • Machine learning
  • IoT
  • Simulation
  • Neural networks
  • Deep learning
  • Wearable technology and health
  • Smart connected things (SCoT)

Published Papers (6 papers)


Research

14 pages, 3280 KiB  
Article
Methods for Continuous Blood Pressure Estimation Using Temporal Convolutional Neural Networks and Ensemble Empirical Mode Decomposition
by Kai Zhou, Zhixiang Yin, Yu Peng and Zhiliang Zeng
Electronics 2022, 11(9), 1378; https://doi.org/10.3390/electronics11091378 - 26 Apr 2022
Cited by 7 | Viewed by 2529
Abstract
Arterial blood pressure is not only an important index measured in routine physical examinations but also a key monitoring parameter of the cardiovascular system in cardiac surgery, drug testing, and intensive care. To improve the accuracy of continuous blood pressure measurement, this paper uses photoplethysmography (PPG) signals to estimate diastolic and systolic blood pressure based on ensemble empirical mode decomposition (EEMD) and a temporal convolutional network (TCN). In this method, the clean PPG signal is decomposed by EEMD into n-order intrinsic mode functions (IMFs), and the IMFs together with the original PPG are fed into the constructed TCN model. The results show that the TCN outperforms CNN, CNN-LSTM, and CNN-GRU models. When the IMFs are added to the input, all of the above models perform better than with PPG alone; the systolic blood pressure (SBP) and diastolic blood pressure (DBP) errors of EEMD-TCN are −1.55 ± 9.92 mmHg and 0.41 ± 4.86 mmHg, respectively. According to these results, the DBP estimate meets the requirements of the AAMI standard and is rated Grade A under the BHS protocol, while the SD of the SBP estimate is close to the AAMI limit and is rated Grade B under the BHS protocol.
(This article belongs to the Special Issue AI and ML in the Future of Wearable Devices)
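
The pipeline described in this abstract can be illustrated with a short sketch. The code below is not the authors' implementation: it simply decomposes a PPG window with EEMD (via the PyEMD package) and feeds the raw signal plus its intrinsic mode functions into a small dilated temporal convolutional network that regresses SBP and DBP. The channel counts, kernel sizes, number of IMFs kept, and sampling rate are illustrative assumptions.

# Minimal sketch (not the authors' code): EEMD decomposition of a PPG window
# followed by a small TCN that regresses [SBP, DBP].
import numpy as np
import torch
import torch.nn as nn
from PyEMD import EEMD  # pip install EMD-signal

def ppg_to_channels(ppg, n_imfs=6):
    """Stack the raw PPG with its first n_imfs EEMD modes as input channels."""
    imfs = EEMD().eemd(ppg)[:n_imfs]                 # (n_modes, len(ppg))
    if imfs.shape[0] < n_imfs:                       # pad if fewer modes were found
        pad = np.zeros((n_imfs - imfs.shape[0], ppg.size))
        imfs = np.vstack([imfs, pad])
    return np.vstack([ppg[None, :], imfs]).astype(np.float32)

class TCNBlock(nn.Module):
    def __init__(self, ch_in, ch_out, dilation):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(ch_in, ch_out, 3, padding=dilation, dilation=dilation), nn.ReLU(),
            nn.Conv1d(ch_out, ch_out, 3, padding=dilation, dilation=dilation), nn.ReLU())
        self.skip = nn.Conv1d(ch_in, ch_out, 1)       # residual connection

    def forward(self, x):
        return self.net(x) + self.skip(x)

class BPRegressor(nn.Module):
    def __init__(self, in_channels=7):                # 1 raw PPG channel + 6 IMFs
        super().__init__()
        self.tcn = nn.Sequential(TCNBlock(in_channels, 32, 1),
                                 TCNBlock(32, 32, 2),
                                 TCNBlock(32, 32, 4))
        self.head = nn.Linear(32, 2)                  # predicts [SBP, DBP]

    def forward(self, x):                             # x: (batch, channels, time)
        return self.head(self.tcn(x).mean(dim=-1))

# Usage on one synthetic 8 s window sampled at 125 Hz:
ppg = np.sin(np.linspace(0, 16 * np.pi, 1000)) + 0.1 * np.random.randn(1000)
x = torch.from_numpy(ppg_to_channels(ppg)).unsqueeze(0)
print(BPRegressor()(x))                               # tensor([[sbp, dbp]])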

21 pages, 1776 KiB  
Article
Improved Human Activity Recognition Using Majority Combining of Reduced-Complexity Sensor Branch Classifiers
by Julian Webber, Abolfazl Mehbodniya, Ahmed Arafa and Ahmed Alwakeel
Electronics 2022, 11(3), 392; https://doi.org/10.3390/electronics11030392 - 28 Jan 2022
Cited by 9 | Viewed by 2417
Abstract
Human activity recognition (HAR) employs machine learning for the automated recognition of motion and has widespread applications across healthcare, daily life, and security. High performance has been demonstrated in particular with video cameras and intensive signal processing such as convolutional neural networks (CNNs). However, lower-complexity algorithms operating on low-rate inertial data are a promising approach for portable use cases such as pairing with smart wearables. This work considers the performance benefit of combining HAR classification estimates from multiple sensors, each with lower-complexity processing, compared with a higher-complexity single-sensor classifier. We show that while the highest single-sensor classification accuracy of 91% can be achieved for seven activities with an optimized number of hidden units and sample rate, the accuracy drops to 56% with a reduced-complexity 50-neuron classifier. However, by majority combining the predictions of three and four low-complexity classifiers, the average classification accuracy increases to 82.5% and 94.4%, respectively, demonstrating the efficacy of this approach.
(This article belongs to the Special Issue AI and ML in the Future of Wearable Devices)
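
A minimal sketch of the majority-combining idea, assuming per-sensor feature windows and scikit-learn's MLPClassifier as the reduced-complexity 50-neuron branch classifier; the data layout, feature sizes, and class counts below are invented for illustration, not taken from the paper.

# Sketch: one small classifier per inertial sensor, combined by majority vote.
import numpy as np
from sklearn.neural_network import MLPClassifier

def train_branch_classifiers(features_per_sensor, y):
    """features_per_sensor: list of (n_windows, n_features) arrays, one per sensor."""
    return [MLPClassifier(hidden_layer_sizes=(50,), max_iter=500).fit(X, y)
            for X in features_per_sensor]

def majority_vote(classifiers, features_per_sensor):
    preds = np.stack([clf.predict(X) for clf, X in zip(classifiers, features_per_sensor)])
    # Most frequent label across the sensor branches, separately for each window.
    return np.array([np.bincount(col).argmax() for col in preds.T])

# Toy usage: 3 sensors, 200 windows, 12 features per window, 7 activity classes.
rng = np.random.default_rng(0)
y = rng.integers(0, 7, 200)
Xs = [rng.normal(size=(200, 12)) + y[:, None] for _ in range(3)]
branches = train_branch_classifiers(Xs, y)
print((majority_vote(branches, Xs) == y).mean())      # combined accuracy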

16 pages, 456 KiB  
Article
Are Microcontrollers Ready for Deep Learning-Based Human Activity Recognition?
by Atis Elsts and Ryan McConville
Electronics 2021, 10(21), 2640; https://doi.org/10.3390/electronics10212640 - 28 Oct 2021
Cited by 11 | Viewed by 3227
Abstract
The last decade has seen exponential growth in the field of deep learning, with deep learning on microcontrollers emerging as a new frontier for this research area. This paper presents a case study of machine learning on microcontrollers, with a focus on human activity recognition using accelerometer data. We build machine learning classifiers suitable for execution on modern microcontrollers and evaluate their performance. Specifically, we compare Random Forests (RF), a classical machine learning technique, with Convolutional Neural Networks (CNN) in terms of classification accuracy and inference speed. The results show that RF classifiers achieve similar levels of classification accuracy while being several times faster than a small custom CNN model designed for the task. The RF and the custom CNN are also several orders of magnitude faster than state-of-the-art deep learning models. On the one hand, these findings confirm the feasibility of using deep learning on modern microcontrollers. On the other hand, they cast doubt on whether deep learning is the best approach for this application, especially if high inference speed and, thus, low energy consumption are the key objectives.
(This article belongs to the Special Issue AI and ML in the Future of Wearable Devices)
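
The kind of accuracy-versus-speed comparison the paper describes can be sketched on a desktop as follows. The window size, class count, and model shapes are assumptions, and true on-microcontroller timings require a deployment toolchain (e.g., TensorFlow Lite Micro), so the numbers produced here only illustrate the methodology, not the paper's results.

# Desktop-side sketch: time a Random Forest vs. a small 1D CNN on the same
# synthetic accelerometer windows.
import time
import numpy as np
import torch
import torch.nn as nn
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 3, 50)).astype(np.float32)   # 2000 windows, 3 axes, 50 samples
y = rng.integers(0, 6, 2000)                             # 6 activity classes (assumed)

rf = RandomForestClassifier(n_estimators=50, max_depth=10).fit(X.reshape(2000, -1), y)
cnn = nn.Sequential(nn.Conv1d(3, 16, 5), nn.ReLU(), nn.AdaptiveAvgPool1d(1),
                    nn.Flatten(), nn.Linear(16, 6))

def mean_runtime(fn, reps=20):
    t0 = time.perf_counter()
    for _ in range(reps):
        fn()
    return (time.perf_counter() - t0) / reps

print("RF  batch inference:", mean_runtime(lambda: rf.predict(X.reshape(2000, -1))), "s")
with torch.no_grad():
    xb = torch.from_numpy(X)
    print("CNN batch inference:", mean_runtime(lambda: cnn(xb)), "s")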

12 pages, 715 KiB  
Article
Audio Feature Engineering for Occupancy and Activity Estimation in Smart Buildings
by Gabriela Santiago, Marvin Jiménez, Jose Aguilar and Edwin Montoya
Electronics 2021, 10(21), 2599; https://doi.org/10.3390/electronics10212599 - 24 Oct 2021
Cited by 1 | Viewed by 1592
Abstract
Occupancy and activity estimation are fields that have been extensively researched in recent years. However, the techniques used typically rely on a mixture of atmospheric features such as humidity and temperature, on many devices such as cameras and audio sensors, or are limited to speech recognition. In this work, we propose that occupancy and activity can be estimated from audio information alone, using an automatic audio feature engineering approach to extract, analyze, and select descriptors/variables. This scheme of audio descriptor extraction is used to determine occupancy and activity in specific smart environments, such that our approach can differentiate between academic, administrative, and commercial environments. Our audio feature engineering approach is compared with previous work on occupancy and/or activity estimation in smart buildings (most of which includes other features, such as atmospheric and visual ones). In general, the results obtained are very encouraging compared with previous studies.
(This article belongs to the Special Issue AI and ML in the Future of Wearable Devices)
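
A hedged sketch of what automatic audio feature engineering can look like, using librosa descriptors and a simple univariate selector. The descriptor set, window length, and occupancy labels below are assumptions chosen for illustration, not the authors' pipeline.

# Sketch: extract fixed-length audio descriptors per window, then keep the
# descriptors most related to the occupancy label.
import numpy as np
import librosa                                     # pip install librosa
from sklearn.feature_selection import SelectKBest, f_classif

def audio_descriptors(y, sr):
    """Summarise one audio window as a fixed-length descriptor vector."""
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
    centroid = librosa.feature.spectral_centroid(y=y, sr=sr)
    zcr = librosa.feature.zero_crossing_rate(y)
    rms = librosa.feature.rms(y=y)
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1),
                           [centroid.mean(), zcr.mean(), rms.mean()]])

# Toy usage: random "recordings" labelled with a coarse occupancy level,
# followed by selection of the 10 most informative descriptors.
sr = 16000
rng = np.random.default_rng(0)
windows = [rng.standard_normal(sr) for _ in range(40)]        # 1 s windows
labels = rng.integers(0, 3, 40)                               # empty / low / high
X = np.vstack([audio_descriptors(w, sr) for w in windows])    # (40, 29)
X_selected = SelectKBest(f_classif, k=10).fit_transform(X, labels)
print(X_selected.shape)                                        # (40, 10)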

22 pages, 3640 KiB  
Article
Human-Mimetic Estimation of Food Volume from a Single-View RGB Image Using an AI System
by Zhengeng Yang, Hongshan Yu, Shunxin Cao, Qi Xu, Ding Yuan, Hong Zhang, Wenyan Jia, Zhi-Hong Mao and Mingui Sun
Electronics 2021, 10(13), 1556; https://doi.org/10.3390/electronics10131556 - 28 Jun 2021
Cited by 14 | Viewed by 3069
Abstract
It is well known that many chronic diseases are associated with an unhealthy diet. Although improving diet is critical, adopting a healthy diet is difficult despite its benefits being well understood. Technology is needed to assess dietary intake accurately and easily in real-world settings so that effective interventions to manage overweight, obesity, and related chronic diseases can be developed. In recent years, new wearable imaging and computational technologies have emerged. These technologies are capable of performing objective and passive dietary assessments with a much simpler procedure than traditional questionnaires. However, a critical task is to estimate the portion size (in this case, the food volume) from a digital image. This task is very challenging because the volumetric information in a two-dimensional image is incomplete, and the estimation involves a great deal of imagination, beyond the capacity of traditional image processing algorithms. In this work, we present a novel artificial intelligence (AI) system that mimics the thinking of dietitians, who use a set of common objects as gauges (e.g., a teaspoon, a golf ball, a cup, and so on) to estimate portion size. Specifically, our human-mimetic system "mentally" gauges the volume of food using a set of internal reference volumes that have been learned previously. At the output, our system produces a vector of probabilities of the food with respect to the internal reference volumes. The estimation is then completed by an "intelligent guess", implemented as an inner product between the probability vector and the reference volume vector. Our experiments using both virtual and real food datasets show accurate volume estimation results.
(This article belongs to the Special Issue AI and ML in the Future of Wearable Devices)
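
The final "intelligent guess" step lends itself to a tiny numeric example: given the network's probability vector over a set of learned reference volumes, the volume estimate is their inner product. The reference volumes and probabilities below are placeholders chosen for illustration, not values from the paper.

# Numeric sketch of the inner-product volume estimate.
import numpy as np

reference_volumes_ml = np.array([5.0, 40.0, 240.0, 500.0])   # e.g., teaspoon, golf ball, cup, bowl
probabilities = np.array([0.05, 0.15, 0.70, 0.10])           # network output for one food image

estimated_volume = probabilities @ reference_volumes_ml       # inner product
print(f"estimated volume ≈ {estimated_volume:.0f} ml")        # ≈ 224 ml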

15 pages, 1057 KiB  
Article
A Simulator to Support Machine Learning-Based Wearable Fall Detection Systems
by Armando Collado-Villaverde, Mario Cobos, Pablo Muñoz and David F. Barrero
Electronics 2020, 9(11), 1831; https://doi.org/10.3390/electronics9111831 - 3 Nov 2020
Cited by 8 | Viewed by 2410
Abstract
People's life expectancy is increasing, resulting in a growing elderly population. That population is subject to dependency issues, with falls being particularly problematic due to the associated health complications. Some projects aim to enhance the independence of elderly people by monitoring their status, typically by means of wearable devices. These devices often feature Machine Learning (ML) algorithms for fall detection using accelerometers. However, the deployed software often lacks reliable data for model training. To overcome this issue, we have developed a publicly available fall simulator capable of recreating accelerometer samples of two of the most common types of falls, syncope and forward, so that they can later be used as input for ML applications. The simulated samples closely resemble real falls recorded with real accelerometers. To validate our approach, we have applied different classifiers to both the simulated falls and data from two public datasets based on real falls. Our tests show that the fall simulator generates accelerometer fall data with high accuracy, allowing larger datasets to be created for training fall detection software in wearable devices.
(This article belongs to the Special Issue AI and ML in the Future of Wearable Devices)
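
As a rough illustration of the idea (not the published simulator, which models falls in far more detail), a synthetic forward-fall acceleration-magnitude trace can be generated as a brief free-fall dip followed by an impact spike; all constants below are assumptions.

# Highly simplified synthetic accelerometer fall sample (magnitude in g).
import numpy as np

def simulate_forward_fall(fs=50, duration_s=4.0, rng=None):
    """Return a simulated acceleration-magnitude trace for one forward fall."""
    rng = rng or np.random.default_rng()
    t = np.arange(int(fs * duration_s)) / fs
    acc = np.ones_like(t)                               # standing: ~1 g
    fall_start = rng.uniform(1.0, 2.0)                  # random fall onset
    free_fall = (t > fall_start) & (t < fall_start + 0.4)
    acc[free_fall] = 0.2                                # near free fall
    impact = np.argmax(t >= fall_start + 0.4)
    acc[impact:impact + 3] = rng.uniform(3.0, 6.0)      # short impact spike
    return acc + 0.05 * rng.standard_normal(t.size)     # sensor noise

sample = simulate_forward_fall()
print(sample.max(), sample.min())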
