Special Issue "Wearable Smart Devices"

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Intelligent Sensors".

Deadline for manuscript submissions: closed (31 May 2018)

Special Issue Editors

Guest Editor
Dr. Geoff Merrett

Cyber Physical Systems Research Group, Department of Electronics and Computer Science, University of Southampton, SO17 1BJ, UK
Interests: mobile and embedded systems; power/energy management; energy harvesting; energy-driven computing; intermittent computing
Guest Editor
Dr. Russel Torah

Smart Electronic Materials and Systems Research Group, Department of Electronics and Computer Science, University of Southampton, SO17 1BJ, UK
Interests: smart textiles; printed electronics; wearable electronics; electronic inks; medical textiles; energy harvesting

Special Issue Information

Dear Colleagues,

Over the past decade, we have seen considerable developments in smart wearable devices and technology. The proliferation of devices—from fitness trackers and healthcare monitors to smart watches and mobile computing—has been fuelled by a combination of advances in underpinning technology and consumer demand and acceptance. However, despite the field's growing maturity, numerous challenges remain across a broad range of areas that require research effort to advance the technology further.

To highlight some of the latest developments in this exciting and relevant field, we invite you to consider submitting a manuscript to our upcoming Special Issue “Wearable Smart Devices”. Both research papers and review articles will be considered. We welcome submissions spanning topics across sensor devices, wearable technologies, and embedded intelligence for smart wearable devices.

Dr. Geoff Merrett
Dr. Russel Torah
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once registered, go to the submission form. Manuscripts can be submitted until the deadline. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1800 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

In the context of smart wearable devices, we solicit papers covering (but not limited to) one or more of the following topics:

  • Sensor devices and technologies
  • Healthcare and medical prototypes and applications
  • Innovative applications and case studies
  • Hardware and software co-design and architectures
  • Smart textiles and printed electronics/sensors
  • Miniaturisation, integration, packaging, wearability and user-acceptance
  • Reliability, washability and durability
  • Wearable IoT
  • Intelligent algorithms, data processing and inference
  • Data fusion or processing for accurate signal estimation
  • Networking and interoperability
  • Energy harvesting
  • System energy/power management

Published Papers (8 papers)

Displaying articles 1-8

Research

Open Access Article: Thermal Energy Harvesting on the Bodily Surfaces of Arms and Legs through a Wearable Thermo-Electric Generator
Sensors 2018, 18(6), 1927; https://doi.org/10.3390/s18061927
Received: 19 March 2018 / Revised: 8 June 2018 / Accepted: 11 June 2018 / Published: 13 June 2018
Abstract
This work analyzes the results of measurements on thermal energy harvesting through a wearable Thermo-electric Generator (TEG) placed on the arms and legs. Four large skin areas were chosen as locations for the placement of the TEGs. In order to place the generator on the body, a specially manufactured band guaranteed proper contact between the skin and the TEG. Preliminary measurements were performed to find the value of the resistor load which maximizes the power output. Then, an experimental investigation was conducted to measure the harvested energy while users were performing daily activities, such as sitting, walking, jogging, and riding a bike. The generated power values were in the range from 5 to 50 μW. Moreover, a preliminary hypothesis based on the obtained results indicates the possibility of using TEGs on the leg for the recognition of locomotion activities. This is due to the relatively high and activity-dependent biomechanical work produced by the gastrocnemius muscle while the user is walking rather than jogging or riding a bike. This result reflects a difference between the temperatures associated with different activities.
(This article belongs to the Special Issue Wearable Smart Devices)
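The load-matching step described in the abstract follows from basic circuit theory: modelling the TEG as a Thévenin source, the delivered power peaks when the load resistance equals the generator's internal resistance. A minimal sketch, with illustrative values not taken from the paper:

```python
# Maximum-power-transfer sketch for a TEG modelled as a Thevenin source
# (open-circuit voltage v_oc in series with internal resistance r_int).
# All numbers below are illustrative, not measurements from the paper.

def teg_load_power(v_oc, r_int, r_load):
    """Power delivered to a resistive load by a Thevenin-equivalent TEG."""
    i = v_oc / (r_int + r_load)        # load current (A)
    return i ** 2 * r_load             # power dissipated in the load (W)

v_oc, r_int = 0.025, 5.0               # 25 mV, 5 ohm: hypothetical values
# Sweep the load and confirm the peak sits at r_load == r_int.
loads = [r_int * s for s in (0.25, 0.5, 1.0, 2.0, 4.0)]
powers = [teg_load_power(v_oc, r_int, r) for r in loads]
best = loads[powers.index(max(powers))]
print(best)            # matched load: equals r_int
print(max(powers))     # = v_oc**2 / (4 * r_int), here about 31 microwatts
```

With these hypothetical values the matched-load power works out to roughly 31 μW, which happens to sit inside the 5–50 μW range the abstract reports.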

Open Access Article: Developing an Acoustic Sensing Yarn for Health Surveillance in a Military Setting
Sensors 2018, 18(5), 1590; https://doi.org/10.3390/s18051590
Received: 23 April 2018 / Revised: 8 May 2018 / Accepted: 14 May 2018 / Published: 17 May 2018
Abstract
Overexposure to high levels of noise can cause permanent hearing disorders, which have a significant adverse effect on the quality of life of those affected. Injury due to noise can affect people in a variety of careers including construction workers, factory workers, and members of the armed forces. By monitoring the noise exposure of workers, overexposure can be avoided and suitable protective equipment can be provided. This work focused on the creation of a noise dosimeter suitable for use by members of the armed forces, where a discrete dosimeter was integrated into a textile helmet cover. In this way the sensing elements could be incorporated very close to the ears, providing a highly representative indication of the sound level entering the body, and also creating a device that would not interfere with military activities. This was achieved by utilising commercial microelectromechanical system microphones integrated within the fibres of yarn to create an acoustic sensing yarn. The acoustic sensing yarns were fully characterised over a range of relevant sound levels and frequencies at each stage in the yarn production process. The yarns were ultimately integrated into a knitted helmet cover to create a functional acoustic sensing helmet cover prototype.
(This article belongs to the Special Issue Wearable Smart Devices)
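A noise dosimeter of the kind described above typically reports exposure as a dose: time at each sound level is accumulated against the permissible duration for that level. The sketch below uses NIOSH-style parameters (85 dB(A) criterion level, 3 dB exchange rate, 8 h reference duration) as an assumption; the paper may well use different parameters, and the exposure samples are invented:

```python
# Noise-dose sketch in the style of a NIOSH dosimeter. The criterion
# level, exchange rate, and exposure samples are illustrative only.

def allowed_hours(level_db, criterion=85.0, exchange=3.0, ref_hours=8.0):
    """Permissible exposure time at a given A-weighted sound level:
    every `exchange` dB above the criterion halves the allowed time."""
    return ref_hours / 2 ** ((level_db - criterion) / exchange)

def noise_dose(samples, **kw):
    """samples: (level_dB, duration_hours) pairs; returns dose in percent.
    100% means the full daily allowance has been used."""
    return 100.0 * sum(t / allowed_hours(level, **kw) for level, t in samples)

# 4 h at 85 dB (half the daily allowance) plus 2 h at 88 dB (the other half):
shift = [(85.0, 4.0), (88.0, 2.0)]
print(noise_dose(shift))   # 100.0
```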

Open Access Article: Unifying Terrain Awareness for the Visually Impaired through Real-Time Semantic Segmentation
Sensors 2018, 18(5), 1506; https://doi.org/10.3390/s18051506
Received: 5 April 2018 / Revised: 5 May 2018 / Accepted: 8 May 2018 / Published: 10 May 2018
Abstract
Navigational assistance aims to help visually-impaired people move through their environment safely and independently. This task is challenging, as it requires detecting a wide variety of scenes to provide higher-level assistive awareness. Vision-based technologies with monocular detectors or depth sensors have emerged from several years of research. These separate approaches have achieved remarkable results with relatively low processing time and have improved the mobility of impaired people to a large extent. However, running all detectors jointly increases the latency and burdens the computational resources. In this paper, we propose using pixel-wise semantic segmentation to cover navigation-related perception needs in a unified way. This is critical not only for terrain awareness regarding traversable areas, sidewalks, stairs and water hazards, but also for the avoidance of short-range obstacles, fast-approaching pedestrians and vehicles. The core of our unification proposal is a deep architecture aimed at attaining efficient semantic understanding. We have integrated the approach in a wearable navigation system by incorporating robust depth segmentation. A comprehensive set of experiments demonstrates accuracy competitive with state-of-the-art methods while maintaining real-time speed. We also present a closed-loop field test involving real visually-impaired users, demonstrating the effectiveness and versatility of the assistive framework.
(This article belongs to the Special Issue Wearable Smart Devices)
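At inference time, pixel-wise semantic segmentation reduces to an argmax over per-class scores at each pixel, from which a traversability mask can be derived. A toy sketch of that reduction; the class labels and traversable set are illustrative, not the paper's actual label map:

```python
import numpy as np

# Hypothetical label map; the paper's actual classes may differ.
CLASSES = {0: "road", 1: "sidewalk", 2: "obstacle", 3: "water"}
TRAVERSABLE = {0, 1}                   # classes considered safe to walk on

def segment(logits):
    """logits: (H, W, C) per-pixel class scores -> (H, W) label map."""
    return logits.argmax(axis=-1)

def traversable_mask(labels):
    """Boolean (H, W) mask of pixels assigned a traversable class."""
    return np.isin(labels, list(TRAVERSABLE))

# Toy 2x2 'image' with one dominant class score per pixel.
logits = np.zeros((2, 2, 4))
logits[0, 0, 0] = 1.0   # road
logits[0, 1, 1] = 1.0   # sidewalk
logits[1, 0, 2] = 1.0   # obstacle
logits[1, 1, 3] = 1.0   # water
labels = segment(logits)
print(traversable_mask(labels))   # top row traversable, bottom row not
```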

Open Access Article: Use of the Stockwell Transform in the Detection of P300 Evoked Potentials with Low-Cost Brain Sensors
Sensors 2018, 18(5), 1483; https://doi.org/10.3390/s18051483
Received: 23 February 2018 / Revised: 14 April 2018 / Accepted: 21 April 2018 / Published: 9 May 2018
Abstract
The evoked potential is a neuronal activity that originates when a stimulus is presented. Various brain-signal-processing techniques can be used to detect it. One of the most studied evoked potentials is the P300 brain wave, which usually appears between 300 and 500 ms after the stimulus. The detection of P300 evoked potentials is currently of great importance due to its unique properties, which enable applications such as spellers, lie detectors, and the diagnosis of psychiatric disorders. The present study demonstrates the usefulness of the Stockwell transform in identifying P300 evoked potentials using a low-cost electroencephalography (EEG) device with only two brain sensors. Signals were acquired using the Emotiv EPOC® device, a wireless EEG headset. In the feature extraction, the Stockwell transform was used to obtain time-frequency information. Linear discriminant analysis and a support vector machine were used in the classification process. The experiments were carried out with 10 healthy male participants with an average age of 25.3 years. In general, good performance (75–92%) was obtained in identifying P300 evoked potentials.
(This article belongs to the Special Issue Wearable Smart Devices)
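For readers unfamiliar with the S-transform, the standard discrete formulation (Stockwell et al., 1996; a generic sketch, not the authors' own code) computes each frequency "voice" as an inverse FFT of the shifted spectrum under a Gaussian window whose width scales with frequency, yielding the time-frequency features used here for P300 detection:

```python
import numpy as np

def stockwell(x):
    """Discrete S-transform of a real signal x (standard formulation):
    voice n is the inverse FFT of the spectrum shifted by n and windowed
    by a Gaussian whose width is proportional to n."""
    N = len(x)
    X = np.fft.fft(x)
    m = np.arange(N)
    m = np.where(m > N // 2, m - N, m)         # symmetric frequency offsets
    S = np.empty((N // 2 + 1, N), dtype=complex)
    S[0] = x.mean()                             # zero-frequency voice
    for n in range(1, N // 2 + 1):
        gauss = np.exp(-2.0 * np.pi ** 2 * m ** 2 / n ** 2)
        S[n] = np.fft.ifft(np.roll(X, -n) * gauss)
    return S

# Sanity check: a pure tone at bin 8 concentrates its energy in voice 8.
N = 128
x = np.cos(2 * np.pi * 8 * np.arange(N) / N)
power = np.abs(stockwell(x)).mean(axis=1)
print(power[1:].argmax() + 1)   # 8
```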

Open Access Article: A Wearable Body Controlling Device for Application of Functional Electrical Stimulation
Sensors 2018, 18(4), 1251; https://doi.org/10.3390/s18041251
Received: 5 March 2018 / Revised: 14 April 2018 / Accepted: 17 April 2018 / Published: 18 April 2018
Abstract
In this research, we describe a new balancing device used to stabilize the rear quarters of a patient dog with spinal cord injuries. Our approach uses inertial measurement sensing and direct leg actuation to lay a foundation for eventual muscle control by means of direct functional electrical stimulation (FES). During this phase of development, we designed and built a mechanical test-bed to develop the control and stimulation algorithms before we use the device on our animal subjects. We designed the bionic test-bed to mimic the typical walking gait of a dog and use it to develop and test the functionality of the balancing device for stabilization of patient dogs with hindquarter paralysis. We present analysis for various muscle stimulation and balancing strategies, and our device can be used by veterinarians to tailor the stimulation strength and temporal distribution for any individual patient dog. We develop stabilizing muscle stimulation strategies using the robotic test-bed to enhance walking stability. We present experimental results using the bionic test-bed to demonstrate that the balancing device can provide an effective sensing strategy and deliver the required motion control commands for stabilizing an actual dog with a spinal cord injury.
(This article belongs to the Special Issue Wearable Smart Devices)

Open Access Article: A Novel GMM-Based Behavioral Modeling Approach for Smartwatch-Based Driver Authentication
Sensors 2018, 18(4), 1007; https://doi.org/10.3390/s18041007
Received: 14 January 2018 / Revised: 27 February 2018 / Accepted: 1 March 2018 / Published: 28 March 2018
Abstract
All drivers have their own distinct driving habits, and usually hold and operate the steering wheel differently in different driving scenarios. In this study, we proposed a novel Gaussian mixture model (GMM)-based method that can improve the traditional GMM in modeling driving behavior. This new method can be applied to build a better driver authentication system based on the accelerometer and orientation sensor of a smartwatch. To demonstrate the feasibility of the proposed method, we created an experimental system that analyzes driving behavior using the built-in sensors of a smartwatch. The experimental results for driver authentication—an equal error rate (EER) of 4.62% in the simulated environment and an EER of 7.86% in the real-traffic environment—confirm the feasibility of this approach.
(This article belongs to the Special Issue Wearable Smart Devices)
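The reported equal error rate (EER) is the operating point where the false-rejection and false-acceptance rates coincide. The sketch below illustrates the enroll-then-verify idea and the EER computation on synthetic scores; for brevity it stands in a single diagonal Gaussian for the paper's GMM, and every distribution and dimension is invented:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 3-axis 'smartwatch features' for the enrolled driver and an
# impostor; the distributions are invented for illustration only.
driver   = rng.normal(0.0, 1.0, size=(500, 3))

def fit_gaussian(X):
    """Enrollment: a single diagonal Gaussian standing in for a full GMM."""
    return X.mean(axis=0), X.var(axis=0) + 1e-6

def log_likelihood(X, mu, var):
    """Per-sample log-likelihood of feature rows X under the model."""
    return -0.5 * ((X - mu) ** 2 / var + np.log(2 * np.pi * var)).sum(axis=1)

mu, var = fit_gaussian(driver)
genuine_scores  = log_likelihood(rng.normal(0.0, 1.0, size=(200, 3)), mu, var)
impostor_scores = log_likelihood(rng.normal(2.0, 1.0, size=(200, 3)), mu, var)

def equal_error_rate(genuine, impostor):
    """Sweep the acceptance threshold; the EER is where the false-rejection
    rate (genuine below threshold) meets the false-acceptance rate
    (impostor at or above threshold)."""
    best = 1.0
    for thr in np.sort(np.concatenate([genuine, impostor])):
        frr = np.mean(genuine < thr)
        far = np.mean(impostor >= thr)
        best = min(best, max(frr, far))
    return best

eer = equal_error_rate(genuine_scores, impostor_scores)
print(f"EER: {eer:.2%}")
```

A lower EER means genuine and impostor score distributions overlap less; the paper's 4.62%/7.86% figures come from real smartwatch data, not from a toy model like this one.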

Open Access Article: A Portable Wireless Communication Platform Based on a Multi-Material Fiber Sensor for Real-Time Breath Detection
Sensors 2018, 18(4), 973; https://doi.org/10.3390/s18040973
Received: 6 February 2018 / Revised: 20 March 2018 / Accepted: 23 March 2018 / Published: 25 March 2018
Abstract
In this paper, we present a new mobile wireless communication platform for real-time monitoring of an individual’s breathing rate. The platform takes the form of a wearable stretchable T-shirt featuring a sensor and a detection base station. The sensor is formed by a spiral-shaped antenna made from a multi-material fiber connected to a compact transmitter. The antenna resonates at approximately 2.4 GHz, the band used by the sensor’s Bluetooth transmitter. The contactless and non-invasive sensor is designed without compromising the user’s comfort. The sensing mechanism is based on the detection of the signal amplitude transmitted wirelessly by the sensor, which is found to be sensitive to strain. We demonstrate the capability of the platform to detect the breathing rates of four male volunteers at rest. The breathing pattern is obtained through the received signal strength indicator (RSSI), which is filtered and analyzed with in-house algorithms in the portable system. Numerical simulations of human breath were performed to support the experimental detection, and the two sets of results are in good agreement. Slow, fast, regular, irregular, and shallow breathing types were successfully recorded within a frequency interval of 0.16–1.2 Hz, corresponding to breathing rates from 10 to 72 breaths per minute.
(This article belongs to the Special Issue Wearable Smart Devices)
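The abstract's 0.16–1.2 Hz band suggests a simple estimator: detrend the RSSI trace and pick the dominant spectral peak inside the breathing band. A sketch on synthetic data, not the paper's algorithm; the sample rate, drift, and noise levels are all assumptions:

```python
import numpy as np

fs = 20.0                        # sample rate (Hz), illustrative
t = np.arange(0, 60.0, 1 / fs)   # one minute of samples

# Synthetic RSSI trace: baseline + slow drift + a breathing oscillation at
# 0.3 Hz (18 breaths/min) + noise. Purely illustrative, not measured data.
rng = np.random.default_rng(1)
rssi = (-60.0 + 0.02 * t
        + 1.5 * np.sin(2 * np.pi * 0.3 * t)
        + 0.3 * rng.standard_normal(t.size))

def breathing_rate_bpm(signal, fs, band=(0.16, 1.2)):
    """Dominant spectral peak inside the breathing band, in breaths/min."""
    x = signal - signal.mean()                  # crude detrend
    freqs = np.fft.rfftfreq(x.size, d=1 / fs)
    spectrum = np.abs(np.fft.rfft(x))
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    peak = freqs[in_band][spectrum[in_band].argmax()]
    return 60.0 * peak

print(round(breathing_rate_bpm(rssi, fs)))   # 18
```

Restricting the search to the band also rejects the low-frequency drift that mean subtraction alone does not remove.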

Open Access Article: Facial Emotion Recognition: A Survey and Real-World User Experiences in Mixed Reality
Sensors 2018, 18(2), 416; https://doi.org/10.3390/s18020416
Received: 10 December 2017 / Revised: 23 January 2018 / Accepted: 26 January 2018 / Published: 1 February 2018
Cited by 1
Abstract
The breadth of possible applications has made emotion recognition an essential and challenging topic in computer science. Non-verbal cues such as gestures, body movement, and facial expressions convey feeling and feedback to the user. This discipline of Human–Computer Interaction relies on algorithmic robustness and the sensitivity of the sensor to improve recognition. Sensors play a significant role in accurate detection by providing very high-quality input, increasing the efficiency and reliability of the system. Automatic recognition of human emotions would help teach social intelligence to machines. This paper presents a brief study of the various approaches and techniques of emotion recognition. The survey covers a succinct review of the databases used as data sets for algorithms that detect emotions from facial expressions. The mixed-reality device Microsoft HoloLens (MHL) is then introduced for observing emotion recognition in Augmented Reality (AR). A brief introduction of its sensors, their application in emotion recognition, and some preliminary results of emotion recognition using the MHL are presented. The paper concludes by comparing the results of emotion recognition by the MHL and a regular webcam.
(This article belongs to the Special Issue Wearable Smart Devices)
