
Special Issue "Multimodal Data Fusion and Machine-Learning for Healthcare"

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Intelligent Sensors".

Deadline for manuscript submissions: closed (30 September 2020) | Viewed by 15318

Special Issue Editor

Prof. Dr. Dian Tjondronegoro
Guest Editor
Department of Business Strategy and Innovation, Griffith Business School, Griffith University, Queensland, Australia
Interests: mobile and pervasive systems; e-health; affective computing; multimedia analysis; interaction design

Special Issue Information

Dear Colleagues,

The Internet of Things enables multi-modal data fusion and analysis of person-centric data, and ultimately the personalization of healthcare service delivery. Wireless, wearable, and ambient technologies support more continuous monitoring of people's wellbeing and of contributing factors such as environmental quality. Interconnecting cloud services bring together and integrate these data to streamline health assessments, while ensuring privacy and information security computationally. Machine learning for the intelligent analysis of health data is integral to long-term health tracking, forecasting, and early warning, as well as to the prevention and management of chronic diseases. Health practitioners should investigate the impact of these technologies in order to achieve better planning, design, delivery, and evaluation of future healthcare services.

The goal of this Special Issue is to publish recent results and applications of integrated sensors and machine-learning-enabled healthcare from academia and industry. We invite original and unpublished manuscripts on topics related, but not limited, to the following:

  • Integration of sensor-enabled monitoring of physical and psychological activities, including considerations of data reliability, privacy, security, safety, comfort, and ease of use.
  • Virtual reality, augmented reality, mixed reality, data visualization, and gamification for the effective display and use of integrated health sensor data.
  • Multi-modal data fusion and analysis techniques for measuring health and wellbeing conditions (e.g., vital signs, physical exertion, sleep quality, mental stress, mood, and brain activity) as well as environmental quality.
  • Machine learning algorithms that support efficient and continuous learning from health data, including new datasets, techniques, and systems that can enable the rapid development of intelligent health analyses.
  • Methods, concepts, and principles for evaluating the impact of sensors on data-driven and evidence-based health monitoring, diagnosis, and interventions.
  • New and emerging integrated sensors technologies and systems that are more robust, sustainable, and secure for real-life deployments and applications.

Prof. Dian Tjondronegoro
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Integrated medical sensors
  • Multi-modal data fusion in healthcare
  • Machine learning in healthcare
  • Visualization and gamification of health big data

Published Papers (8 papers)


Research

Article
EAGA-MLP—An Enhanced and Adaptive Hybrid Classification Model for Diabetes Diagnosis
Sensors 2020, 20(14), 4036; https://doi.org/10.3390/s20144036 - 20 Jul 2020
Cited by 59 | Viewed by 1932
Abstract
Disease diagnosis is a critical task that must be performed with extreme precision. In recent times, medical data mining has gained popularity for disease datasets derived from complex healthcare problems. Unstructured healthcare data contain irrelevant information that can degrade the prediction ability of classifiers. Therefore, an effective attribute optimization technique must be used to eliminate less relevant data and optimize the dataset for enhanced accuracy. Type 2 diabetes, studied here through the Pima Indian Diabetes dataset, affects millions of people around the world. Optimization techniques can be applied to generate a reliable dataset of symptoms that is useful for more accurate diagnosis of diabetes. This study presents a new hybrid attribute optimization algorithm, the Enhanced and Adaptive Genetic Algorithm (EAGA), used to obtain an optimized symptoms dataset. Based on readings of symptoms in the optimized dataset, a possible occurrence of diabetes is forecasted. The EAGA model is then combined with a Multilayer Perceptron (MLP) to determine the presence or absence of type 2 diabetes in patients based on the detected symptoms. The proposed classification approach is named Enhanced and Adaptive Genetic Algorithm-Multilayer Perceptron (EAGA-MLP). Its performance was validated against several vital performance metrics. The results show a maximum accuracy rate of 97.76% and an execution time of 1.12 s. Furthermore, the proposed model achieves an F-score of 86.8% and a precision of 80.2%. Compared with many existing studies, the classification accuracy of the proposed EAGA-MLP model clearly outperformed all previous classification models. Its impact and effectiveness were also tested on seven other disease datasets, where the mean accuracy, precision, recall, and F-score obtained were 94.7%, 91%, 89.8%, and 90.4%, respectively. The proposed model can thus assist medical experts in accurately determining the risk factors of type 2 diabetes and in classifying its presence in patients, supporting healthcare experts in the diagnosis of affected patients. Full article
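The abstract's pipeline, genetic-algorithm attribute selection feeding a classifier, can be sketched in miniature. This is a hypothetical toy, not the paper's EAGA-MLP: the dataset, fitness function, threshold classifier, and adaptive mutation schedule are all illustrative stand-ins.

```python
import random

random.seed(0)

# Toy dataset: 8 attributes, only attributes 0 and 3 carry signal.
def make_sample():
    x = [random.random() for _ in range(8)]
    y = 1 if x[0] + x[3] > 1.0 else 0
    return x, y

data = [make_sample() for _ in range(200)]

def fitness(mask):
    """Score an attribute subset: accuracy of a trivial threshold
    classifier on the selected attributes, minus a size penalty."""
    if not any(mask):
        return 0.0
    correct = 0
    for x, y in data:
        s = sum(v for v, m in zip(x, mask) if m)
        pred = 1 if s > 0.5 * sum(mask) else 0
        correct += pred == y
    return correct / len(data) - 0.01 * sum(mask)

def evolve(pop_size=30, gens=40, n_feat=8):
    """Elitist GA over attribute masks with an adaptive mutation
    rate: explore early, exploit late."""
    pop = [[random.randint(0, 1) for _ in range(n_feat)] for _ in range(pop_size)]
    for g in range(gens):
        pop.sort(key=fitness, reverse=True)
        mut = 0.2 * (1 - g / gens) + 0.02        # adaptive mutation rate
        parents = pop[: pop_size // 2]           # elitism: keep the best half
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, n_feat)    # one-point crossover
            child = a[:cut] + b[cut:]
            child = [bit ^ (random.random() < mut) for bit in child]
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
```

In this toy setup the GA reliably converges on a mask that keeps the two informative attributes; the selected subset would then be fed to an MLP in the paper's full pipeline.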
(This article belongs to the Special Issue Multimodal Data Fusion and Machine-Learning for Healthcare)

Article
Person Re-ID by Fusion of Video Silhouettes and Wearable Signals for Home Monitoring Applications
Sensors 2020, 20(9), 2576; https://doi.org/10.3390/s20092576 - 01 May 2020
Cited by 3 | Viewed by 1458
Abstract
The use of visual sensors for monitoring people in their living environments is key to obtaining more accurate health measurements, but it is undermined by the issue of privacy. Silhouettes, generated from RGB video, can help alleviate the privacy issue to a considerable degree. However, using silhouettes makes it rather complex to discriminate between different subjects, preventing a subject-tailored analysis of the data within a free-living, multi-occupancy home. This limitation can be overcome with a strategic fusion of sensors involving wearable accelerometer devices, which can be used in conjunction with the silhouette video data to match video clips to the specific patient being monitored. The proposed method simultaneously solves the problem of Person ReID using silhouettes and enables home monitoring systems to employ sensor fusion techniques for data analysis. We develop a multimodal deep-learning detection framework that maps short video clips and accelerations into a latent space where the Euclidean distance can be measured to match video and acceleration streams. We train our method on the SPHERE Calorie Dataset, for which we show an average area under the ROC curve of 76.3% and an assignment accuracy of 77.4%. In addition, we propose a novel triplet loss for which we demonstrate improved performance and convergence speed. Full article
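The latent-space matching described above rests on a triplet objective. Below is a minimal sketch of the standard triplet loss on Euclidean distances; the paper proposes a novel variant, which is not reproduced here, and the margin value is an illustrative assumption.

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Standard triplet loss on L2 distances in the latent space:
    pull the matching (video, acceleration) pair together and push
    the mismatched pair at least `margin` further apart."""
    d_pos = np.linalg.norm(anchor - positive, axis=-1)
    d_neg = np.linalg.norm(anchor - negative, axis=-1)
    return np.maximum(d_pos - d_neg + margin, 0.0).mean()

# Matching streams embedded nearby, a mismatch far away: zero loss.
a = np.array([[0.0, 0.0]])
p = np.array([[0.05, 0.0]])
n = np.array([[1.0, 1.0]])
loss = triplet_loss(a, p, n)
```

Minimizing this objective over many (video, acceleration) triplets is what makes the Euclidean distance in the latent space meaningful for stream assignment.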

Article
Foveation Pipeline for 360° Video-Based Telemedicine
Sensors 2020, 20(8), 2264; https://doi.org/10.3390/s20082264 - 16 Apr 2020
Cited by 2 | Viewed by 1310
Abstract
Pan-tilt-zoom (PTZ) and omnidirectional cameras serve as a video-mediated communication interface for telemedicine. Most cases use either PTZ or omnidirectional cameras exclusively; even when used together, images from the two are shown separately on 2D displays. Conventional foveated imaging techniques may offer a solution for exploiting the benefits of both cameras, i.e., the high resolution of the PTZ camera and the wide field-of-view of the omnidirectional camera, but displaying the unified image on a 2D display would reduce the benefit of "omni-" directionality. In this paper, we introduce a foveated imaging pipeline designed to support virtual reality head-mounted displays (HMDs). The pipeline consists of two parallel processes: one for estimating parameters for the integration of the two images and another for rendering images in real time. A control mechanism for placing the foveal region (i.e., the high-resolution area) in the scene and zooming is also proposed. Our evaluations showed that the proposed pipeline achieved, on average, 17 frames per second when rendering the foveated view on an HMD, and showed angular resolution improvement in the foveal region compared with the omnidirectional camera view. However, the improvement was less significant at zoom levels of 8× and above. We discuss possible improvement points and future research directions. Full article
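The integration of the two images can be pictured as an angular blending weight: full weight to the high-resolution PTZ image inside the foveal region, falling smoothly to the omnidirectional image outside it. This is a hypothetical sketch of that idea, not the paper's pipeline; the fovea radius, falloff width, and smoothstep profile are illustrative assumptions.

```python
import numpy as np

def foveal_blend_weight(theta_deg, fovea_radius_deg=15.0, falloff_deg=10.0):
    """Weight of the high-resolution PTZ image at angular distance
    `theta_deg` from the foveal centre: 1 inside the fovea, smoothly
    falling to 0 (pure omnidirectional image) over `falloff_deg`."""
    t = np.clip((theta_deg - fovea_radius_deg) / falloff_deg, 0.0, 1.0)
    return 1.0 - t * t * (3.0 - 2.0 * t)  # smoothstep falloff

# Blended pixel = w * ptz_pixel + (1 - w) * omni_pixel, per direction.
```

A smooth falloff like this avoids a visible seam at the boundary between the two sources when the foveal region is moved or zoomed.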

Article
Detection of Atrial Fibrillation Using 1D Convolutional Neural Network
Sensors 2020, 20(7), 2136; https://doi.org/10.3390/s20072136 - 10 Apr 2020
Cited by 27 | Viewed by 2531
Abstract
The automatic detection of atrial fibrillation (AF) is crucial because of its association with the risk of embolic stroke. Most existing AF detection methods convert the 1D time-series electrocardiogram (ECG) signal into a 2D spectrogram to train a complex AF detection system, which results in heavy training computation and high implementation cost. This paper proposes an AF detection method based on an end-to-end 1D convolutional neural network (CNN) architecture to raise detection accuracy and reduce network complexity. By investigating the impact of the major components of a convolutional block on detection accuracy and using grid search to obtain optimal hyperparameters of the CNN, we develop a simple yet effective 1D CNN. Since the dataset provided by the PhysioNet Challenge 2017 contains ECG recordings of different lengths, we also propose a length normalization algorithm to generate equal-length records that meet the CNN's fixed input-length requirement. Experimental results and analysis indicate that our 1D CNN achieves an average F1 score of 78.2%, offering better detection accuracy with lower network complexity than existing deep learning-based methods. Full article
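The equal-length requirement can be met by mapping each record onto a fixed grid. A minimal sketch by linear interpolation follows; the paper's own length normalization algorithm is not reproduced here, and the 9000-sample target is an illustrative assumption.

```python
import numpy as np

def normalize_length(ecg, target_len=9000):
    """Resample a variable-length ECG record onto a fixed-length grid
    by linear interpolation, so every record fits the CNN input layer."""
    ecg = np.asarray(ecg, dtype=float)
    src = np.linspace(0.0, 1.0, num=len(ecg))   # original sample positions
    dst = np.linspace(0.0, 1.0, num=target_len) # target sample positions
    return np.interp(dst, src, ecg)
```

Alternatives with the same effect include zero-padding short records and cropping long ones; interpolation preserves the whole record at the cost of slightly rescaling its time axis.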

Article
Investigation of Machine Learning Approaches for Traumatic Brain Injury Classification via EEG Assessment in Mice
Sensors 2020, 20(7), 2027; https://doi.org/10.3390/s20072027 - 04 Apr 2020
Cited by 9 | Viewed by 1990
Abstract
Due to the difficulties and complications in the quantitative assessment of traumatic brain injury (TBI) and its increasing relevance in today's world, robust detection of TBI has become more significant than ever. In this work, we investigate several machine learning approaches to assess their performance in classifying electroencephalogram (EEG) data of TBI in a mouse model. Algorithms such as decision trees (DT), random forest (RF), neural networks (NN), support vector machines (SVM), K-nearest neighbors (KNN), and convolutional neural networks (CNN) were analyzed based on their ability to classify mild TBI (mTBI) data against the control group in wake stages for different epoch lengths. Average power in different frequency sub-bands and the alpha:theta power ratio of the EEG were used as input features for the machine learning approaches. Results in this mouse model were promising, suggesting that similar approaches may be applicable to detect TBI in humans in practical scenarios. Full article
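The input features named above, sub-band power and the alpha:theta ratio, can be computed from an EEG epoch with a plain FFT periodogram. A minimal sketch on a synthetic signal; the sampling rate, epoch length, and exact band edges are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def band_power(sig, fs, lo, hi):
    """Average power of `sig` in the [lo, hi) Hz band, using an
    FFT periodogram as the power spectral density estimate."""
    freqs = np.fft.rfftfreq(len(sig), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(sig)) ** 2 / len(sig)
    band = (freqs >= lo) & (freqs < hi)
    return psd[band].mean()

fs = 256.0
t = np.arange(0, 4.0, 1.0 / fs)  # one 4-second epoch
# Synthetic epoch: strong 10 Hz (alpha) plus weak 6 Hz (theta) rhythm.
sig = 2.0 * np.sin(2 * np.pi * 10 * t) + 0.5 * np.sin(2 * np.pi * 6 * t)
alpha = band_power(sig, fs, 8, 13)
theta = band_power(sig, fs, 4, 8)
ratio = alpha / theta
```

Each epoch then yields a small feature vector (one power per sub-band plus the ratio) that any of the listed classifiers can consume.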

Article
Using Class-Specific Feature Selection for Cancer Detection with Gene Expression Profile Data of Platelets
Sensors 2020, 20(5), 1528; https://doi.org/10.3390/s20051528 - 10 Mar 2020
Cited by 2 | Viewed by 1202
Abstract
A novel multi-classification method, integrating the elastic net and a probabilistic support vector machine, is proposed for cancer detection with gene expression profile data of platelets, a multi-class classification problem characterized by high dimensionality, small sample sizes, and collinear data. The one-versus-all (OVA) strategy was employed to decompose the multi-classification problem into a series of binary classification problems. The elastic net was used to select class-specific features for the binary classification problems, and the probabilistic support vector machine was used to make the outputs of the binary classifiers with class-specific features comparable. Simulation data and gene expression profile data were used to verify the effectiveness of the proposed method. Results indicate that the proposed method can automatically select class-specific features and achieves better classification performance than conventional multi-class classification methods, which are mainly based on global feature selection. This study indicates that the proposed method is suitable for general multi-classification problems featuring high dimensionality, small samples, and collinear data. Full article
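The OVA decomposition reduces the multi-class problem to one binary scorer per class, with the class of the highest score winning. A minimal sketch, with a plain least-squares linear scorer and synthetic clusters standing in for the paper's elastic-net feature selection, probabilistic SVM, and platelet data.

```python
import numpy as np

rng = np.random.default_rng(0)

def ova_train(X, y, n_classes):
    """One-versus-all: fit one binary linear scorer per class on
    +1/-1 targets (class c vs. the rest)."""
    Xb = np.hstack([X, np.ones((len(X), 1))])  # append bias column
    W = []
    for c in range(n_classes):
        t = (y == c).astype(float) * 2 - 1     # +1 for class c, -1 otherwise
        w, *_ = np.linalg.lstsq(Xb, t, rcond=None)
        W.append(w)
    return np.array(W)

def ova_predict(W, X):
    """Predict the class whose binary scorer responds most strongly."""
    Xb = np.hstack([X, np.ones((len(X), 1))])
    return (Xb @ W.T).argmax(axis=1)

# Three well-separated 2D clusters as a stand-in for the real data.
X = np.vstack([rng.normal(c, 0.3, size=(30, 2)) for c in ((0, 0), (3, 0), (0, 3))])
y = np.repeat([0, 1, 2], 30)
W = ova_train(X, y, 3)
acc = (ova_predict(W, X) == y).mean()
```

In the paper, each binary subproblem additionally gets its own class-specific feature subset from the elastic net, and the probabilistic SVM calibrates the scores so the argmax comparison is meaningful.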

Article
Machine Learning Techniques Applied to Dose Prediction in Computed Tomography Tests
Sensors 2019, 19(23), 5116; https://doi.org/10.3390/s19235116 - 22 Nov 2019
Cited by 3 | Viewed by 2473
Abstract
A growing number of patients exposed to radiation from computed tomography (CT) face an increased risk of developing tumors or cancer caused by cell mutation. A lower dose level would decrease the number of these possible cases; however, it can leave medical specialists (radiologists) unable to detect anomalies or lesions. This work explores a way of addressing these concerns, reducing unnecessary radiation without compromising the diagnosis. We contribute a novel methodology in the CT area to predict the precise radiation dose that a patient should be given to accomplish this goal. Specifically, from a real dataset composed of the dose data of over fifty thousand patients classified into standardized protocols (skull, abdomen, thorax, pelvis, etc.), we eliminate atypical records (outliers) and then generate regression curves employing diverse well-known Machine Learning techniques. As a result, we have chosen the best analytical technique per protocol, a selection thoroughly carried out according to traditional dosimetry parameters, to accurately quantify the dose level that the radiologist should apply in each CT test. Full article
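The per-protocol workflow, outlier removal followed by fitting a regression curve, can be sketched as below. The IQR fence, the polynomial model, and patient weight as the predictor are all illustrative assumptions, not the paper's chosen techniques or variables.

```python
import numpy as np

def fit_protocol_curve(weights, doses, degree=2, k=1.5):
    """Per-protocol dose model: drop IQR outliers from the dose data,
    then fit a polynomial regression curve to what remains."""
    weights = np.asarray(weights, dtype=float)
    doses = np.asarray(doses, dtype=float)
    q1, q3 = np.percentile(doses, [25, 75])
    iqr = q3 - q1
    keep = (doses >= q1 - k * iqr) & (doses <= q3 + k * iqr)
    return np.polynomial.Polynomial.fit(weights[keep], doses[keep], degree)

rng = np.random.default_rng(1)
w = rng.uniform(50, 100, 200)            # synthetic patient weights (kg)
d = 0.02 * w**2 + rng.normal(0, 2, 200)  # synthetic dose readings
d[:3] = 500.0                            # a few atypical records (outliers)
curve = fit_protocol_curve(w, d)
pred = curve(70.0)                       # predicted dose for a 70 kg patient
```

One such curve would be fitted per standardized protocol (skull, abdomen, thorax, pelvis, etc.), and the best-performing regression technique selected for each.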

Article
Machine-Learning-Based Detection of Craving for Gaming Using Multimodal Physiological Signals: Validation of Test-Retest Reliability for Practical Use
Sensors 2019, 19(16), 3475; https://doi.org/10.3390/s19163475 - 09 Aug 2019
Cited by 4 | Viewed by 1492
Abstract
Internet gaming disorder in adolescents and young adults has become an increasing public concern because of its high prevalence rate and potential risk of alteration of brain functions and organizations. Cue exposure therapy is designed for reducing or maintaining craving, a core factor of relapse of addiction, and is extensively employed in addiction treatment. In a previous study, we proposed a machine-learning-based method to detect craving for gaming using multimodal physiological signals including photoplethysmogram, galvanic skin response, and electrooculogram. Our previous study demonstrated that a craving for gaming could be detected with a fairly high accuracy; however, as the feature vectors for the machine-learning-based detection of the craving of a user were selected based on the physiological data of the user that were recorded on the same day, the effectiveness of the reuse of the machine learning model constructed during the previous experiments, without any further calibration sessions, was still questionable. This “high test-retest reliability” characteristic is of importance for the practical use of the craving detection system because the system needs to be repeatedly applied to the treatment processes as a tool to monitor the efficacy of the treatment. We presented short video clips of three addictive games to nine participants, during which various physiological signals were recorded. This experiment was repeated with different video clips on three different days. Initially, we investigated the test-retest reliability of 14 features used in a craving detection system by computing the intraclass correlation coefficient. 
Then, we classified whether each participant experienced a craving for gaming in the third experiment using various classifiers trained with the physiological signals recorded during the first or second experiment: the support vector machine, k-nearest neighbors (kNN), centroid displacement-based kNN, linear discriminant analysis, and random forest. The craving/non-craving states in the third experiment were classified with an accuracy comparable to that achieved using same-day data, demonstrating the high test-retest reliability and practicality of our craving detection method. In addition, the classification performance was further enhanced by using the datasets of both the first and second experiments to train the classifiers, suggesting that an individually customized game-craving detection system with high accuracy can be implemented by accumulating datasets recorded on different days under different experimental conditions. Full article
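Test-retest reliability of a feature across sessions is quantified with the intraclass correlation coefficient. A minimal sketch of the one-way random-effects form, ICC(1,1); whether this is the exact ICC variant the authors computed is an assumption.

```python
import numpy as np

def icc_1_1(scores):
    """One-way random-effects ICC(1,1) for test-retest reliability.
    `scores` is an (n subjects) x (k repeated sessions) array."""
    scores = np.asarray(scores, dtype=float)
    n, k = scores.shape
    grand = scores.mean()
    row_means = scores.mean(axis=1)
    # Between-subject and within-subject mean squares (one-way ANOVA).
    ms_between = k * ((row_means - grand) ** 2).sum() / (n - 1)
    ms_within = ((scores - row_means[:, None]) ** 2).sum() / (n * (k - 1))
    return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)

# A feature that is perfectly reproducible across three sessions -> ICC = 1.
stable = np.array([[1.0, 1.0, 1.0], [2.0, 2.0, 2.0], [3.0, 3.0, 3.0]])
icc = icc_1_1(stable)
```

Features with a high ICC across the three recording days are the ones worth keeping in a craving detection model that is reused without per-day recalibration.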
