Sensing and Signal Processing in Smart Healthcare

Introduction
In the last decade, we have seen rapid development of electronic technologies that are transforming our daily lives. Such technologies often integrate various sensors that facilitate the collection of human motion and physiological data, and are equipped with wireless communication modules such as Bluetooth, radio frequency identification (RFID), and near-field communication (NFC). In smart healthcare applications [1], designing ergonomic and intuitive human-computer interfaces is crucial, because a system that is not easy to use creates a major obstacle to adoption and may significantly reduce the efficacy of the solution. Signal and data processing is another important consideration, because smart healthcare applications must deliver high accuracy with a high confidence level if clinicians are to rely on them when making diagnosis and treatment decisions. For this Special Issue, we received a total of 26 contributions and accepted 10 of them. The contributions are mostly from authors in Europe, including Italy, Spain, France, Portugal, Romania, Sweden, and the Netherlands; there are also authors from China, Korea, Taiwan, Indonesia, and Ecuador. All 10 papers have been cited soon after publication, with an average of 7 citations per paper; one of the papers [2] has already been cited 22 times. The accepted papers can be roughly divided into two categories: (1) signal processing and (2) smart healthcare systems.

Signal Processing
Five of the 10 papers in this Special Issue are related to signal processing. Two of them used traditional methods, and the remaining three used machine learning algorithms.
In [3], Ricci and Meacci address the need to detect the weak fluid signal without saturating on the pipe-wall components in pulse-wave Doppler ultrasound. The weak fluid signal may contain critical information about the industrial fluids and suspensions flowing in pipes. They propose a numerical demodulator architecture that auto-tunes its internal dynamics to adapt to the features of the actual input signal. They validated the proposed demodulator both through simulation and through experiments; for the latter, they integrated the demodulator into a system for the detection of the velocity profile of fluids flowing in pipes. Their data-adaptive demodulator achieves a noise reduction of at least 20 dB with respect to competing approaches, and can recover a correct velocity profile even when the input data are sampled at a reduced 8 bits instead of the typical 12-16 bits.
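The auto-tuning dynamics are specific to [3], but the front end they adapt is standard quadrature (I/Q) demodulation. The following numpy sketch shows only that generic step; the carrier frequency, sample rate, and the one-period moving-average low-pass filter are illustrative assumptions, not the paper's design:

```python
import numpy as np

def iq_demodulate(x, fc, fs):
    """Mix a real signal down to baseband I/Q components.
    A moving-average filter (one carrier period long) suppresses
    the 2*fc mixing product; fc is the carrier, fs the sample rate."""
    t = np.arange(len(x)) / fs
    i = x * np.cos(2 * np.pi * fc * t)      # in-phase mixing
    q = -x * np.sin(2 * np.pi * fc * t)     # quadrature mixing
    n = int(fs // fc)                       # samples per carrier period
    kernel = np.ones(n) / n
    return np.convolve(i, kernel, 'same'), np.convolve(q, kernel, 'same')

# a unit-amplitude tone at the carrier demodulates to I ~ 0.5, Q ~ 0
fs, fc = 16000.0, 1000.0
t = np.arange(1600) / fs
I, Q = iq_demodulate(np.cos(2 * np.pi * fc * t), fc, fs)
```

In a practical Doppler front end the fixed moving average would be replaced by a properly designed low-pass filter; the point here is only the mixing-plus-filtering structure that a data-adaptive demodulator tunes.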
In [4], Pham and Suh propose a method to improve the state of the art in simulation data generation for human activities. The availability of high-fidelity simulation data would help researchers experiment with different human activity classification and detection algorithms. The simulation data are based on position and attitude data collected via inertial sensors mounted on the foot. The position and attitude data are then used as control points for simulation data generation using spline functions. They validated their data generation algorithm with two scenarios: a 2D walking path and a 3D walking path.
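Spline-based path generation of this kind can be sketched with a Catmull-Rom spline, one common choice that passes exactly through its control points. The control-point values below are hypothetical, and [4] does not necessarily use this particular spline family:

```python
import numpy as np

def catmull_rom(points, samples_per_segment=10):
    """Interpolate a dense path through sparse control points
    (e.g., foot positions from inertial sensors) with a
    Catmull-Rom spline; endpoints are duplicated for padding."""
    pts = np.asarray(points, dtype=float)
    padded = np.vstack([pts[0], pts, pts[-1]])
    out = []
    for i in range(len(pts) - 1):
        p0, p1, p2, p3 = padded[i], padded[i + 1], padded[i + 2], padded[i + 3]
        t = np.linspace(0.0, 1.0, samples_per_segment, endpoint=False)[:, None]
        out.append(0.5 * (2 * p1
                          + (-p0 + p2) * t
                          + (2 * p0 - 5 * p1 + 4 * p2 - p3) * t ** 2
                          + (-p0 + 3 * p1 - 3 * p2 + p3) * t ** 3))
    out.append(pts[-1][None, :])
    return np.vstack(out)

# hypothetical 2D walking-path control points
path = catmull_rom([[0, 0], [1, 0.5], [2, 0.2], [3, 1.0]])
```

The same function works unchanged for 3D control points, mirroring the paper's 2D and 3D walking-path scenarios.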
In [5], Torti et al. present their study on the delineation of brain cancer, an important step in guiding neurosurgeons during tumor resection. More specifically, they address the performance issue of using the K-means clustering algorithm to delineate brain cancer by employing a parallel architecture. With this improvement, the algorithm can provide real-time processing to guide the neurosurgeon during the tumor resection task. The proposed parallel K-means clustering works with the OpenMP, CUDA, and OpenCL paradigms, and it has been validated on an in-vivo hyperspectral human brain image database. They show that their algorithm achieves a speed-up of about 150 times with respect to sequential processing.
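The sequential baseline that such parallel implementations accelerate can be sketched in a few lines of numpy; the per-pixel distance computation (the `d` step below) is the embarrassingly parallel part that maps naturally onto OpenMP threads or CUDA/OpenCL kernels. The deterministic initialization from the first k samples is a simplification for illustration, not the paper's scheme:

```python
import numpy as np

def kmeans(X, k, iters=20):
    """Plain sequential K-means. X: (n_pixels, n_bands) spectra."""
    centers = X[:k].astype(float).copy()   # naive deterministic init
    for _ in range(iters):
        # distance of every pixel to every center: the step that
        # parallel implementations distribute across threads or GPU cores
        d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels, centers

# two well-separated groups of toy "spectra"
X = np.array([[0.0, 0.0], [0.1, 0.0], [0.0, 0.1],
              [10.0, 10.0], [10.1, 10.0], [10.0, 10.1]])
labels, centers = kmeans(X, 2)
```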
In [6], Calamanti et al. report an exploratory study on using machine learning methods to detect endothelial dysfunction, which is critical to the early diagnosis of cardiovascular diseases, from photoplethysmography signals. For their study, they built a new dataset from data collected from 59 subjects. They experimented with three classifiers, namely, support vector machine (SVM), random forest, and k-nearest neighbors, and show that SVM outperforms the others with 71% accuracy. By including anthropometric features, they improved the recall rate from 59% to 67%.
In [7], Nurmaini et al. propose the use of deep learning to extract features from electrocardiogram (ECG) data for machine-learning-based classification of normal and abnormal heartbeats. Stacked denoising autoencoders and autoencoders are used for feature learning during the pre-training phase, and deep neural networks are used during the fine-tuning phase. They used the MIT-BIH Arrhythmia Database and the MIT-BIH Noise Stress Test Database to validate the proposed approach. They experimented with six models to select the best deep learning model and demonstrated excellent results in terms of classification accuracy, sensitivity, specificity, precision, and F1-score.
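The five figures of merit reported in [6] and [7] all follow from the binary confusion matrix. As a reference, a small sketch (the toy labels are hypothetical, and the sketch assumes both classes are present):

```python
import numpy as np

def binary_metrics(y_true, y_pred):
    """Accuracy, sensitivity (recall), specificity, precision, and
    F1-score from binary labels (1 = abnormal beat, 0 = normal)."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_true == 1) & (y_pred == 1))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    accuracy = (tp + tn) / len(y_true)
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    precision = tp / (tp + fp)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    return accuracy, sensitivity, specificity, precision, f1

acc, sens, spec, prec, f1 = binary_metrics([1, 1, 1, 0, 0, 0],
                                           [1, 1, 0, 0, 0, 1])
```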

Smart Healthcare Systems
Five papers in this Special Issue are related to systems for smart healthcare. Two of the papers aim at detecting the presence of a human being at specific locations, either using pre-placed sensors [2] or using Bluetooth signals, assuming that the person being monitored carries a Bluetooth device [8]. The remaining three papers aim at direct human activity detection using inertial and time-of-arrival sensors [9], a near-infrared camera [10], and a Microsoft Kinect sensor [11].
In [2], Bassoli et al. propose a system for human activity monitoring using a set of sensors at specific locations, such as an armchair sensor in the kitchen area, a magnetic contact sensor on the bedroom door, a toilet sensor, and a passive infrared sensor in the bedroom. These sensors are directly connected to the Internet and report their data to a predefined cloud service. The main contribution of the paper is an experimental study of different ways of saving battery power on these sensors without losing any monitoring accuracy.
In [8], Marin et al. present their work on the implementation of intelligent luminaries with sensing and communication capabilities for use in smart homes. The system consists of a server, smart bulbs, and dummy bulbs. The server is responsible for collecting and storing the data reported by the bulbs and for generating reports and alerts based on the collected data. The smart bulbs are connected to the server via WiFi, while the dummy bulbs are connected to the smart bulbs via Bluetooth. Both types of bulbs can sense Bluetooth signals for indoor localization, and both have the logic to control the intensity of their LEDs. Each smart bulb is powered by a Raspberry Pi 3 and is equipped with a variety of environmental sensors, such as temperature, humidity, CO2, and ambient light intensity; it can also send data to connected medical devices. The system can generate two types of alerts: one when the room temperature exceeds a predefined threshold, and another when the monitored person has been present in the bathroom for too long.
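The two alert rules reduce to simple threshold checks on the server side. A minimal sketch; the function name and threshold values are hypothetical, not taken from [8]:

```python
def check_alerts(temp_c, bathroom_minutes,
                 temp_limit=30.0, bathroom_limit=45.0):
    """Evaluate the two alert conditions described above.
    Thresholds are illustrative defaults, not the paper's values."""
    alerts = []
    if temp_c > temp_limit:
        alerts.append("temperature")           # room too hot
    if bathroom_minutes > bathroom_limit:
        alerts.append("bathroom-presence")     # person in bathroom too long
    return alerts
```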
In [11], Rybarczyk et al. describe a tele-rehabilitation system for patients who have undergone hip replacement surgery. Two methods were evaluated for rehabilitation activity recognition based on data collected by a Microsoft Kinect sensor [12]: dynamic time warping and hidden Markov models. The authors also conducted a cognitive walkthrough to assess the activity recognition accuracy as well as the system's usability.
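Of the two recognition methods, dynamic time warping is the simpler to sketch. Below is a textbook DTW distance between two 1-D sequences; the system in [11] of course operates on multi-dimensional Kinect joint trajectories rather than scalar signals:

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic time warping distance between two 1-D sequences:
    the minimum cumulative |a_i - b_j| cost over all monotonic
    alignments, computed by dynamic programming."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

d = dtw_distance([1, 2, 3], [1, 2, 2, 3])  # identical up to warping -> 0.0
```

In a recognizer, a recorded exercise would be labeled with the template whose DTW distance to it is smallest.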
In [9], Xu et al. propose the use of sensor fusion to improve measurement accuracy with inertial-measurement-unit (IMU) and time-of-arrival (ToA) devices, where the latter is used to mitigate the drift and accumulative errors of the former. They used simulation to demonstrate better performance than individual IMU or ToA approaches in human motion tracking, particularly when the direction of movement changes. In addition, the authors performed a comprehensive analysis of the fundamental limits of their fusion method. They believe that their work paves the way for the method's use in wearable motion tracking applications, such as smart health.
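The core idea, using absolute ToA fixes to cancel accumulating IMU drift, can be illustrated with a 1-D complementary-filter sketch. The gain alpha, the bias value, and the 1-D motion model are all assumptions for illustration, far simpler than the fusion and limits analysis in [9]:

```python
import numpy as np

def fuse(imu_disp, toa_pos, alpha=0.1):
    """Dead-reckon with IMU displacement increments, then pull the
    estimate toward each ToA position fix to bound drift."""
    x = toa_pos[0]
    est = [x]
    for d, z in zip(imu_disp, toa_pos[1:]):
        x = x + d                  # IMU prediction (accumulates bias)
        x = x + alpha * (z - x)    # ToA correction
        est.append(x)
    return np.array(est)

# truth: one unit per step; IMU measures 1.1 (0.1 bias per step)
truth = list(range(11))
est = fuse([1.1] * 10, truth)
```

Pure dead reckoning would end 1.0 away from the true position after ten steps, while the fused estimate stays within a bounded error set by alpha and the bias.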
In [10], Wang et al. report a non-intrusive video-based sleep monitoring system. They address some major technical challenges in detecting human sleep poses from infrared images. They first identify joint positions and build a human model that is robust to occlusion; they then derive sleep poses from the joint positions using probabilistic reasoning to further overcome missing joint data due to occlusion. They validated their system using video polysomnography data recorded in a sleep laboratory, and the results are promising.