Special Issue "Deep Learning in Biomedical Informatics and Healthcare"

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Biomedical Sensors".

Deadline for manuscript submissions: 31 October 2021.

Special Issue Editors

Dr. Gianluca Borghini
Guest Editor
Department of Molecular Medicine, Sapienza University of Rome, 00185 Rome, Italy
Interests: cognitive neuroscience; machine learning; neuroscience; signal processing
Dr. Gianluca Di Flumeri
Co-Guest Editor
Department of Molecular Medicine, Sapienza University of Rome, 00185 Rome, Italy
Interests: brain activity; cognitive neuroscience; EEG; signal processing; brain computer interface
Dr. Nicolina Sciaraffa
Co-Guest Editor
Department of Molecular Medicine, Sapienza University of Rome, 00185 Rome, Italy
Interests: neuroimaging; passive brain–computer interface; human factors; machine learning; applied neuroscience; cooperation
Dr. Mobyen Uddin Ahmed
Co-Guest Editor
Associate Professor, School of Innovation, Design and Engineering (IDT), Mälardalen University, 72220 Västerås, Sweden
Interests: deep learning; case-based reasoning; data mining; fuzzy logic; machine learning and machine intelligence approaches for big data analytics
Dr. Manousos Klados
Co-Guest Editor
Department of Psychology, International Faculty of the University of Sheffield, CITY College, 54626 Thessaloniki, Greece
Interests: brain networks; affective and personality neuroscience; applied neuroscience; EEG signal processing; machine learning; graph theory

Special Issue Information

Dear Colleagues,

Artificial intelligence (AI) is already part of our everyday lives, and over the past few years it has grown explosively, driven largely by the wide availability of GPUs that make parallel processing ever faster, cheaper, and more powerful. Deep learning (DL) has enabled many practical applications of machine learning and, by extension, of the overall field of AI. DL has been applied to numerous research areas, such as mental state prediction and classification, image and speech recognition, computer vision, and predictive healthcare. The main advantage of DL algorithms lies in building a computational model of a large dataset by learning representations of the data at multiple levels; DL models can therefore offer insight into the complex structure of large datasets. While the performance of DL algorithms on controlled settings and datasets has been widely demonstrated and recognized, realistic healthcare contexts present additional issues and limitations that make high performance harder to achieve, for example, the invasiveness and cost of biomedical signal recording systems.

In this regard, the aim of this Special Issue is to collect the latest DL algorithms and applications for everyday life, everyday contexts, and various research areas in which biomedical signals, for example, the electroencephalogram (EEG), electrocardiogram (ECG), and galvanic skin response (GSR), are considered, and possibly combined, for monitoring the user's mental state during realistic tasks (i.e., passive BCI), for adapting Human–Machine Interactions (HMIs), and for an objective and comprehensive assessment of the user's wellbeing, as in remote and predictive healthcare applications.

Areas covered by this section include but are not limited to the following:

  • Modelling and assessment of mental states and physical/psychological impairments/disorders;
  • Remote and predictive healthcare;
  • Transfer learning;
  • Human–Machine Interactions (HMIs);
  • Adaptive automation;
  • Human Performance Envelope (HPE);
  • Passive Brain-Computer Interaction (pBCI);
  • Wearable technologies;
  • Multimodality for neurophysiological assessment.

All types of manuscripts are considered, including original basic science reports, translational research, clinical studies, review articles, and methodology papers.

Dr. Gianluca Borghini
Dr. Gianluca Di Flumeri
Dr. Nicolina Sciaraffa
Assoc. Prof. Mobyen Uddin Ahmed
Assoc. Prof. Manousos Klados
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2200 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • healthcare
  • deep learning
  • transfer learning
  • diagnosis
  • mental states
  • biomedical signal fusion
  • machine learning
  • artificial intelligence
  • adaptive automation
  • passive brain–computer interface

Published Papers (5 papers)


Research

Article
Performance Evaluation of Machine Learning Frameworks for Aphasia Assessment
Sensors 2021, 21(8), 2582; https://doi.org/10.3390/s21082582 - 07 Apr 2021
Abstract
Speech assessment is an essential part of the rehabilitation procedure for patients with aphasia (PWA). It is a comprehensive and time-consuming process that aims to discriminate between healthy individuals and aphasic patients, determine the type of aphasia syndrome, and determine the patients’ impairment severity levels (these are referred to here as aphasia assessment tasks). Hence, the automation of aphasia assessment tasks is essential. In this study, the performance of three automatic speech assessment models based on the speech dataset-type was investigated. Three types of datasets were used: healthy subjects’ dataset, aphasic patients’ dataset, and a combination of healthy and aphasic datasets. Two machine learning (ML)-based frameworks, classical machine learning (CML) and deep neural network (DNN), were considered in the design of the proposed speech assessment models. In this paper, the DNN-based framework was based on a convolutional neural network (CNN). Direct or indirect transformation of these models to achieve the aphasia assessment tasks was investigated. Comparative performance results for each of the speech assessment models showed that quadrature-based high-resolution time-frequency images with a CNN framework outperformed all the CML frameworks over the three dataset-types. The CNN-based framework reported an accuracy of 99.23 ± 0.003% with the healthy individuals’ dataset and 67.78 ± 0.047% with the aphasic patients’ dataset. Moreover, direct or transformed relationships between the proposed speech assessment models and the aphasia assessment tasks are attainable, given a suitable dataset-type, a reasonably sized dataset, and appropriate decision logic in the ML framework.
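The winning pipeline above feeds time-frequency images of speech into a CNN. As an illustrative sketch only (the paper's quadrature-based transform is not reproduced here), a plain log-magnitude STFT turned into a normalised image could look like the following, with window and hop sizes chosen arbitrarily:

```python
import numpy as np

def spectrogram_image(signal, win=256, hop=128):
    """Short-time Fourier magnitude, log-scaled and normalised to [0, 1].

    A generic stand-in for the quadrature-based time-frequency images
    described in the paper (whose exact transform is not given here).
    """
    window = np.hanning(win)
    frames = [signal[i:i + win] * window
              for i in range(0, len(signal) - win + 1, hop)]
    spec = np.abs(np.fft.rfft(frames, axis=1)).T   # (freq bins, time frames)
    img = np.log1p(spec)
    return (img - img.min()) / (img.max() - img.min() + 1e-12)

fs = 8000
t = np.arange(fs) / fs                      # one second of signal
x = np.sin(2 * np.pi * 440 * t)             # synthetic stand-in for speech
img = spectrogram_image(x)
print(img.shape)                            # (freq bins, time frames)
```

Each such image would then be fed to the CNN classifier; the paper's actual transform is higher-resolution than this plain spectrogram.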
(This article belongs to the Special Issue Deep Learning in Biomedical Informatics and Healthcare)

Article
A Video-Based Technique for Heart Rate and Eye Blinks Rate Estimation: A Potential Solution for Telemonitoring and Remote Healthcare
Sensors 2021, 21(5), 1607; https://doi.org/10.3390/s21051607 - 25 Feb 2021
Abstract
Current telemedicine and remote healthcare applications foresee different interactions between the doctor and the patient relying on the use of commercial and medical wearable sensors and internet-based video conferencing platforms. Nevertheless, the existing applications necessarily require contact between the patient and the sensors for an objective evaluation of the patient’s state. The proposed study explored an innovative video-based solution for monitoring neurophysiological parameters of potential patients and assessing their mental state. In particular, we investigated the possibility of estimating the heart rate (HR) and eye blink rate (EBR) of participants performing laboratory tasks by means of facial video analysis. The objectives of the study were to: (i) assess the effectiveness of the proposed technique in estimating the HR and EBR by comparing them with laboratory sensor-based measures, and (ii) assess the capability of the video-based technique to discriminate between the participants’ resting state (Nominal condition) and their active state (Non-nominal condition). The results demonstrated that the HR and EBR estimated through the facial video technique and the laboratory equipment did not statistically differ (p > 0.1), and that these neurophysiological parameters allowed discrimination between the Nominal and Non-nominal states (p < 0.02).
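The core video-based idea, recovering HR from small periodic colour changes in the face, can be sketched with a toy remote-photoplethysmography (rPPG) estimator. The function and the synthetic trace below are illustrative assumptions, not the authors' pipeline (which would also involve face tracking and filtering):

```python
import numpy as np

def estimate_hr_bpm(green_trace, fps):
    """Estimate heart rate from a mean green-channel trace of the face.

    Illustrative sketch of the general rPPG idea: find the dominant
    spectral peak inside the plausible heart-rate band.
    """
    x = green_trace - np.mean(green_trace)          # remove DC component
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fps)
    band = (freqs >= 0.7) & (freqs <= 4.0)          # 42-240 bpm band
    peak = freqs[band][np.argmax(spectrum[band])]
    return 60.0 * peak

fps = 30.0
t = np.arange(0, 20, 1 / fps)                       # 20 s of video frames
trace = 0.05 * np.sin(2 * np.pi * 1.2 * t) + 1.0    # 72 bpm pulse + skin tone
print(round(estimate_hr_bpm(trace, fps)))           # 72
```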

Article
InstanceEasyTL: An Improved Transfer-Learning Method for EEG-Based Cross-Subject Fatigue Detection
Sensors 2020, 20(24), 7251; https://doi.org/10.3390/s20247251 - 17 Dec 2020
Cited by 1
Abstract
The electroencephalogram (EEG) is an effective indicator for the detection of driver fatigue. Due to the significant differences in EEG signals across subjects, and the difficulty of collecting sufficient EEG samples for analysis during driving, detecting fatigue across subjects through EEG signals remains a challenge. EasyTL is a transfer-learning model that has demonstrated good performance in the field of image recognition but has not yet been applied in cross-subject EEG-based applications. In this paper, we propose an improved EasyTL-based classifier, InstanceEasyTL, to perform EEG-based analysis for cross-subject fatigue mental-state detection. Experimental results show that InstanceEasyTL not only requires less EEG data, but also obtains better accuracy and robustness than EasyTL and existing machine-learning models such as Support Vector Machines (SVM), Transfer Component Analysis (TCA), Geodesic Flow Kernel (GFK), and Domain-Adversarial Neural Networks (DANN).
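EasyTL's internals are not reproduced here, but the general shape of cross-subject transfer, mapping one subject's feature distribution onto another's, can be illustrated with a minimal correlation-alignment (CORAL-style) sketch. This is a generic stand-in under stated assumptions, not the InstanceEasyTL algorithm:

```python
import numpy as np

def coral_align(source, target, eps=1e-6):
    """Align source features to the target covariance (CORAL).

    A minimal domain-adaptation sketch in the spirit of cross-subject
    transfer; NOT the InstanceEasyTL algorithm itself.
    """
    cs = np.cov(source, rowvar=False) + eps * np.eye(source.shape[1])
    ct = np.cov(target, rowvar=False) + eps * np.eye(target.shape[1])
    # Whiten the source features, then re-colour with the target covariance.
    whiten = np.linalg.inv(np.linalg.cholesky(cs)).T
    colour = np.linalg.cholesky(ct).T
    return source @ whiten @ colour

rng = np.random.default_rng(0)
src = rng.normal(size=(200, 4)) * [1, 2, 3, 4]      # "source subject" features
tgt = rng.normal(size=(200, 4))                     # "target subject" features
aligned = coral_align(src, tgt)
print(aligned.shape)                                # (200, 4)
```

After alignment, the second-order statistics of `aligned` match those of the target subject, so a classifier trained on the source data transfers more gracefully.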

Article
The Probability of Ischaemic Stroke Prediction with a Multi-Neural-Network Model
Sensors 2020, 20(17), 4995; https://doi.org/10.3390/s20174995 - 03 Sep 2020
Cited by 2
Abstract
Cerebral stroke has become one of the main diseases endangering people’s health; ischaemic strokes account for approximately 85% of cerebral strokes. According to research, early prediction and prevention can effectively reduce the incidence rate of the disease. However, ischaemic stroke is difficult to predict because the data related to the disease are multi-modal. To achieve high prediction accuracy and combine the stroke risk predictors obtained by previous researchers, a method for predicting the probability of stroke occurrence based on a multi-model fusion convolutional neural network structure is proposed. In this way, the accuracy of ischaemic stroke prediction is improved by processing multi-modal data through multiple end-to-end neural networks. In this method, feature extraction of structured data (age, gender, history of hypertension, etc.) and streaming data (heart rate, blood pressure, etc.) based on a convolutional neural network is first realized. A neural network model for feature fusion is then constructed to fuse the features of the structured and streaming data. Finally, a predictive model for the probability of stroke is obtained by training. As shown in the experimental results, the accuracy of ischaemic stroke prediction reached 98.53%. Such a high prediction accuracy will be helpful for preventing the occurrence of stroke.
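The fusion step described above, separate encoders per modality followed by concatenation of their feature vectors, can be sketched as follows. The toy linear "encoders" and all dimensions are illustrative assumptions standing in for the paper's convolutional networks:

```python
import numpy as np

rng = np.random.default_rng(1)

def encode(x, w):
    """Toy per-modality 'encoder': one linear layer with ReLU, standing in
    for the paper's end-to-end convolutional feature extractors."""
    return np.maximum(x @ w, 0.0)

structured = rng.normal(size=(8, 5))     # age, gender, hypertension history...
streaming = rng.normal(size=(8, 20))     # windowed heart-rate / BP samples
w_struct = rng.normal(size=(5, 16))      # illustrative encoder weights
w_stream = rng.normal(size=(20, 16))

# Late fusion: concatenate the per-modality feature vectors; a shared head
# would then map each fused vector to a stroke probability.
fused = np.concatenate([encode(structured, w_struct),
                        encode(streaming, w_stream)], axis=1)
print(fused.shape)                       # (8, 32)
```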

Article
A Robust Multilevel DWT Densely Network for Cardiovascular Disease Classification
Sensors 2020, 20(17), 4777; https://doi.org/10.3390/s20174777 - 24 Aug 2020
Cited by 2
Abstract
Cardiovascular disease is the leading cause of death worldwide. Immediate and accurate diagnoses of cardiovascular disease are essential for saving lives. Although most previously reported works have tried to classify heartbeats accurately under the intra-patient paradigm, they suffer from category imbalance issues, since abnormal heartbeats appear much less frequently than normal heartbeats. Furthermore, most existing methods rely on data preprocessing steps, such as noise removal and R-peak location. In this study, we present a robust classification system using a multilevel discrete wavelet transform densely network (MDD-Net) for the accurate detection of normal heartbeats, coronary artery disease (CAD), myocardial infarction (MI) and congestive heart failure (CHF). First, the raw ECG signals from different databases are divided into same-size segments using an original adaptive sample frequency segmentation algorithm (ASFS). Then, fusion features are extracted from the MDD-Net to achieve great classification performance. We evaluated the proposed method considering the intra-patient and inter-patient paradigms. The average accuracy, positive predictive value, sensitivity and specificity were 99.74%, 99.09%, 98.67% and 99.83%, respectively, under the intra-patient paradigm, and 96.92%, 92.17%, 89.18% and 97.77%, respectively, under the inter-patient paradigm. Moreover, the experimental results demonstrate that our model is robust to noise and class imbalance issues.
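The multilevel DWT front end can be sketched with a hand-rolled Haar decomposition. The wavelet family and segment length below are assumptions for illustration; the abstract does not specify them:

```python
import numpy as np

def haar_dwt_multilevel(x, levels):
    """Multilevel Haar DWT: returns [cA_n, cD_n, ..., cD_1].

    A minimal stand-in for the multilevel DWT front end of MDD-Net
    (whose wavelet family is not specified in the abstract).
    """
    coeffs = []
    a = np.asarray(x, dtype=float)
    for _ in range(levels):
        a = a[: len(a) - len(a) % 2]             # ensure even length
        approx = (a[0::2] + a[1::2]) / np.sqrt(2)
        detail = (a[0::2] - a[1::2]) / np.sqrt(2)
        coeffs.insert(0, detail)
        a = approx
    coeffs.insert(0, a)
    return coeffs

beat = np.sin(np.linspace(0, 2 * np.pi, 256))    # toy ECG segment
cA3, cD3, cD2, cD1 = haar_dwt_multilevel(beat, 3)
print(len(cA3), len(cD3), len(cD2), len(cD1))    # 32 32 64 128
```

The subband coefficients at each level would then feed the densely connected network as multi-resolution features; being orthonormal, the transform preserves the signal's energy across the subbands.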
