Special Issue "Biomedical Signal Processing for Disease Diagnosis"

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Biosensors".

Deadline for manuscript submissions: 15 January 2021.

Special Issue Editors

Prof. Dr. Carlos Gómez
Guest Editor
Biomedical Engineering Group, Universidad de Valladolid, Paseo Belén, 15, 47011 Valladolid, Spain
Interests: biomedical signals; signal processing; nonlinear analyses; connectivity measures; electroencephalography; magnetoencephalography
Prof. Dr. Raúl Alcaraz
Guest Editor
Research Group in Electronic, Biomedical and Telecommunication Engineering, Universidad de Castilla-La Mancha, Campus Universitario s/n, 16071 Cuenca, Spain
Interests: entropy; complexity; information theory; information geometry; nonlinear dynamics; computational mathematics and statistics in medicine; biomedical time series analysis; cardiac signal processing

Special Issue Information

Dear Colleagues,

Nowadays, sensors are integrated into many medical devices in order to record the signals generated by the physiological activity of the human body. Biomedical signal processing is an interdisciplinary field in which physicians, mathematicians, biologists, and engineers, among others, collaborate to develop and/or apply mathematical methods to extract useful information from the recorded physiological data. This Special Issue aims to attract researchers interested in the application of signal processing methods to different biomedical signals (electrocardiogram, electroencephalogram, magnetoencephalogram, electromyogram, galvanic skin response, pulse oximetry, photoplethysmogram, etc.) to help physicians in the diagnosis of human diseases. Original papers that describe new research on this subject are welcome. We look forward to your participation in this Special Issue.

Prof. Dr. Carlos Gómez
Prof. Dr. Raúl Alcaraz
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for the submission of manuscripts are available on the Instructions for Authors page. Sensors is an international, peer-reviewed, open access, semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2000 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Biomedical signal processing
  • Biomedical signals (ECG, EEG, PPG, EDR, EMG, etc.)
  • Diseases
  • Aid diagnosis
  • Physiological time series dynamics
  • Physiological redundancy, synergy, complexity, and connectivity
  • Linear and non-linear data processing
  • Time, frequency, and time–frequency analyses

Published Papers (9 papers)


Research


Open Access Article
An Automated System for Classification of Chronic Obstructive Pulmonary Disease and Pneumonia Patients Using Lung Sound Analysis
Sensors 2020, 20(22), 6512; https://doi.org/10.3390/s20226512 - 14 Nov 2020
Abstract
Chronic obstructive pulmonary disease (COPD) and pneumonia are two of the few fatal lung diseases that share common adventitious lung sounds. Diagnosing these diseases from lung sound analysis, in order to design a noninvasive technique for telemedicine, is a challenging task. A novel framework is presented to diagnose COPD and pneumonia by applying signal processing and machine learning. This model will help the pulmonologist to accurately detect both diseases. COPD, normal, and pneumonia lung sound (LS) data from the ICBHI respiratory database are used in this research. The performance analysis shows that the quadratic discriminant classifier achieves the best results, with an accuracy of 99.70% on the selected fused features. A fusion of time-domain, cepstral, and spectral features is employed. Feature selection for fusion is performed through backward elimination, whereas empirical mode decomposition (EMD) and discrete wavelet transform (DWT)-based techniques are used to denoise and segment the pulmonic signal. Class imbalance is addressed with the adaptive synthetic (ADASYN) sampling technique.
(This article belongs to the Special Issue Biomedical Signal Processing for Disease Diagnosis)
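To make the pipeline described above concrete, the following minimal Python sketch illustrates its main ingredients: DWT-based sub-band features, ADASYN balancing, and a quadratic discriminant classifier. It assumes PyWavelets, imbalanced-learn, and scikit-learn are available, and it substitutes toy signals for the ICBHI recordings; the feature set is a simplified stand-in, not the authors' fused feature set.

```python
# Sketch of a DWT-feature / ADASYN / QDA pipeline on toy lung-sound segments.
import numpy as np
import pywt
from imblearn.over_sampling import ADASYN
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

def dwt_features(segment, wavelet="db4", level=4):
    """Simple statistics (mean, spread, energy) from each DWT sub-band."""
    feats = []
    for band in pywt.wavedec(segment, wavelet, level=level):
        feats.extend([np.mean(band), np.std(band), np.sum(band ** 2)])
    return np.array(feats)

rng = np.random.default_rng(0)
segments = rng.standard_normal((120, 4000))            # toy lung-sound segments
labels = np.array([0] * 60 + [1] * 40 + [2] * 20)      # 0=normal, 1=COPD, 2=pneumonia (imbalanced)

X = np.vstack([dwt_features(s) for s in segments])
X_bal, y_bal = ADASYN(random_state=0).fit_resample(X, labels)   # balance the classes

qda = QuadraticDiscriminantAnalysis()
print(cross_val_score(qda, X_bal, y_bal, cv=5).mean())
```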

Open Access Article
Relationship between the Presence of the ApoE ε4 Allele and EEG Complexity along the Alzheimer’s Disease Continuum
Sensors 2020, 20(14), 3849; https://doi.org/10.3390/s20143849 - 10 Jul 2020
Abstract
Alzheimer’s disease (AD) is the most prevalent cause of dementia and is considered a major health problem, especially in developed countries. Late-onset AD is the most common form of the disease, with symptoms appearing after 65 years of age. Genetic determinants of AD risk remain largely unknown, though the ε4 allele of the ApoE gene has been reported as the strongest genetic risk factor for AD. The objective of this study was to analyze the relationship between brain complexity and the presence of ApoE ε4 alleles along the AD continuum. For this purpose, resting-state electroencephalography (EEG) activity was analyzed by computing Lempel-Ziv complexity (LZC) from 46 healthy control subjects, 49 mild cognitive impairment subjects, 45 mild AD patients, 44 moderate AD patients, and 33 severe AD patients, subdivided by ApoE status. Subjects with one or more ApoE ε4 alleles were included in the carriers subgroups, whereas the ApoE ε4 non-carriers subgroups were formed by subjects without any ε4 allele. Our results showed that the AD continuum is characterized by a progressive complexity loss. No differences were observed between AD ApoE ε4 carriers and non-carriers. However, brain activity from healthy subjects with the ApoE ε4 allele (carriers subgroup) is more complex than that from non-carriers, mainly in left temporal, frontal, and posterior regions (p-values < 0.05, FDR-corrected Mann–Whitney U-test). These results suggest that the presence of the ApoE ε4 allele could modify EEG complexity patterns in different brain regions, such as the temporal lobes. These alterations might be related to anatomical changes associated with neurodegeneration, increasing the risk of suffering dementia due to AD before its clinical onset. This finding might help advance the development of new tools for early AD diagnosis.
(This article belongs to the Special Issue Biomedical Signal Processing for Disease Diagnosis)
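As an illustration of the complexity measure used in this study, the sketch below computes a normalized Lempel-Ziv complexity on a binarized synthetic EEG epoch. Median binarization and Kaspar-Schuster phrase counting are common choices; the exact settings used in the paper may differ.

```python
# Normalized Lempel-Ziv complexity (LZC) of a median-binarized EEG epoch.
import numpy as np

def lz76(s: str) -> int:
    """Kaspar-Schuster counting of LZ76 phrases in a binary string."""
    i, k, l, k_max, c, n = 0, 1, 1, 1, 1, len(s)
    while True:
        if s[i + k - 1] == s[l + k - 1]:
            k += 1
            if l + k > n:
                c += 1
                break
        else:
            k_max = max(k, k_max)
            i += 1
            if i == l:                      # no longer reproducible: new phrase
                c += 1
                l += k_max
                if l + 1 > n:
                    break
                i, k, k_max = 0, 1, 1
            else:
                k = 1
    return c

def lzc(epoch: np.ndarray) -> float:
    """Binarize around the median, count phrases, normalize by n / log2(n)."""
    binary = "".join("1" if x > np.median(epoch) else "0" for x in epoch)
    n = len(binary)
    return lz76(binary) * np.log2(n) / n

rng = np.random.default_rng(1)
t = np.arange(0, 10, 1 / 256)                               # 10 s epoch at 256 Hz
eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(t.size)
print(f"normalized LZC: {lzc(eeg):.3f}")
```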

Open Access Article
Evaluation of Deep Neural Networks for Semantic Segmentation of Prostate in T2W MRI
Sensors 2020, 20(11), 3183; https://doi.org/10.3390/s20113183 - 03 Jun 2020
Cited by 1
Abstract
In this paper, we present an evaluation of four encoder–decoder CNNs for the segmentation of the prostate gland in T2W magnetic resonance imaging (MRI) images. The four selected CNNs are FCN, SegNet, U-Net, and DeepLabV3+, which were originally proposed for the segmentation of road-scene, biomedical, and natural images. Segmentation of the prostate in T2W MRI images is an important step in the automatic diagnosis of prostate cancer, enabling better lesion detection and staging. Therefore, many research efforts have been devoted to improving the segmentation of the prostate gland in MRI images. The main challenges of prostate gland segmentation are the blurry prostate boundary and the variability of the prostate anatomical structure. In this work, we investigated the performance of encoder–decoder CNNs for segmentation of the prostate gland in T2W MRI. Image pre-processing techniques, including image resizing, center-cropping, and intensity normalization, are applied to address the issues of inter-patient and inter-scanner variability as well as the dominance of background pixels over prostate pixels. In addition, to enrich the network with more data, increase data variation, and improve its accuracy, patch extraction and data augmentation are applied prior to training the networks. Furthermore, class weight balancing is used to avoid biased networks, since the number of background pixels is much higher than the number of prostate pixels; this class imbalance problem is handled by using a weighted cross-entropy loss function during training of the CNN model. The performance of the CNNs is evaluated in terms of the Dice similarity coefficient (DSC), and our experimental results show that patch-wise DeepLabV3+ gives the best performance, with a DSC of 92.8%. This is the highest DSC score compared to FCN, SegNet, and U-Net, and it is also competitive with a recently published state-of-the-art method for prostate segmentation.
(This article belongs to the Special Issue Biomedical Signal Processing for Disease Diagnosis)
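The class-weighted cross-entropy loss and the Dice similarity coefficient mentioned above can be expressed compactly in PyTorch. The sketch below uses small random tensors in place of real network outputs and ground-truth masks, and the class weights are illustrative values, not the ones used by the authors.

```python
# Weighted cross-entropy training loss and Dice similarity coefficient (DSC).
import torch
import torch.nn.functional as F

def dice_coefficient(pred_mask: torch.Tensor, true_mask: torch.Tensor, eps: float = 1e-6):
    """DSC = 2|A∩B| / (|A| + |B|) on binary prostate masks."""
    inter = (pred_mask * true_mask).sum()
    return (2 * inter + eps) / (pred_mask.sum() + true_mask.sum() + eps)

# Fake batch: 2 patches, 2 classes (background / prostate), 64x64 pixels.
logits = torch.randn(2, 2, 64, 64)                     # network output
target = torch.randint(0, 2, (2, 64, 64))              # ground-truth label map

# Weight the prostate class more heavily to counter the dominant background.
class_weights = torch.tensor([0.2, 0.8])
loss = F.cross_entropy(logits, target, weight=class_weights)

pred = logits.argmax(dim=1)                            # predicted label map
dsc = dice_coefficient((pred == 1).float(), (target == 1).float())
print(loss.item(), dsc.item())
```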

Open Access Article
Exploration of User’s Mental State Changes during Performing Brain–Computer Interface
Sensors 2020, 20(11), 3169; https://doi.org/10.3390/s20113169 - 03 Jun 2020
Cited by 2
Abstract
Substantial developments have been established in the past few years for enhancing the performance of brain–computer interfaces (BCIs) based on steady-state visual evoked potentials (SSVEPs). Past SSVEP-BCI studies have utilized different target frequencies with flashing stimuli in many different applications. However, it is not easy to recognize a user's mental state changes while performing an SSVEP-BCI task; what can be observed is the increasing EEG power at the target frequency over the user's visual area. Changes in the BCI user's cognitive state, especially between a mental-focus state and a lost-in-thought state, affect BCI performance during sustained SSVEP usage. Therefore, differentiating BCI users' physiological states by exploring changes in their neural activity while performing SSVEP is a key technology for enhancing BCI performance. In this study, we designed a new BCI experiment that combined a working-memory task with the flashing targets of an SSVEP task using 12 Hz or 30 Hz frequencies. By exploring the EEG activity changes corresponding to the working-memory and SSVEP task performance, we can recognize whether the user's cognitive state is mental focus or lost in thought. Experimental results show that the delta (1–4 Hz), theta (4–7 Hz), and beta (13–30 Hz) EEG activities increased more in the mental-focus state than in the lost-in-thought state at the frontal lobe. In addition, the powers of the delta (1–4 Hz), alpha (8–12 Hz), and beta (13–30 Hz) bands increased more in the mental-focus state in comparison with the lost-in-thought state at the occipital lobe. Moreover, the average classification performance across subjects for the KNN and Bayesian network classifiers was 77% to 80%. These results show how mental state changes affect the performance of BCI users. In this work, we developed a new scenario to recognize the user's cognitive state while performing BCI tasks. These findings can be used as novel neural markers in future BCI developments.
(This article belongs to the Special Issue Biomedical Signal Processing for Disease Diagnosis)
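The band-power comparison described above can be sketched with a Welch power spectral density estimate integrated over the standard EEG bands. The code below uses a synthetic occipital channel with a 12 Hz component standing in for an SSVEP response; it illustrates the general analysis, not the authors' exact processing chain.

```python
# Band powers (delta/theta/alpha/beta) from a Welch PSD of one EEG channel.
import numpy as np
from scipy.signal import welch
from scipy.integrate import trapezoid

def band_powers(eeg, fs, bands):
    freqs, psd = welch(eeg, fs=fs, nperseg=fs * 2)       # 2-second Welch windows
    powers = {}
    for name, (lo, hi) in bands.items():
        mask = (freqs >= lo) & (freqs <= hi)
        powers[name] = trapezoid(psd[mask], freqs[mask]) # integrate PSD over the band
    return powers

fs = 250
t = np.arange(0, 20, 1 / fs)
rng = np.random.default_rng(2)
occipital = np.sin(2 * np.pi * 12 * t) + rng.standard_normal(t.size)  # 12 Hz "SSVEP" + noise

bands = {"delta": (1, 4), "theta": (4, 7), "alpha": (8, 12), "beta": (13, 30)}
print(band_powers(occipital, fs, bands))
```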

Open Access Article
EEG Signal Analysis for Diagnosing Neurological Disorders Using Discrete Wavelet Transform and Intelligent Techniques
Sensors 2020, 20(9), 2505; https://doi.org/10.3390/s20092505 - 28 Apr 2020
Cited by 4
Abstract
Analysis of electroencephalogram (EEG) signals is essential because it is an efficient method to diagnose neurological brain disorders. In this work, a single system is developed to diagnose one or two neurological diseases at the same time (two-class mode and three-class mode). For this purpose, different EEG feature-extraction and classification techniques are investigated to aid in the accurate diagnosis of neurological brain disorders: epilepsy and autism spectrum disorder (ASD). Two different modes, single-channel and multi-channel, of EEG signals are analyzed for epilepsy and ASD. The independent component analysis (ICA) technique is used to remove artifacts from the EEG dataset. Then, the EEG dataset is segmented and filtered to remove noise and interference using an elliptic band-pass filter. Next, the EEG signal features are extracted from the filtered signal using a discrete wavelet transform (DWT) to decompose the filtered signal into its delta, theta, alpha, beta, and gamma sub-bands. Subsequently, five statistical methods are used to extract features from the EEG sub-bands: logarithmic band power (LBP), standard deviation, variance, kurtosis, and Shannon entropy (SE). Finally, the features are fed into four different classifiers, namely linear discriminant analysis (LDA), support vector machine (SVM), k-nearest neighbor (KNN), and artificial neural networks (ANNs), to classify the features according to their classes. The combination of DWT with SE and LBP produces the highest accuracy among all the classifiers. The overall classification accuracy approaches 99.9% using SVM and 97% using ANN for the three-class single-channel and multi-channel modes, respectively.
(This article belongs to the Special Issue Biomedical Signal Processing for Disease Diagnosis)
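A minimal sketch of the feature-extraction stage described above (DWT decomposition followed by the five sub-band statistics and an SVM) is given below, assuming PyWavelets, SciPy, and scikit-learn. Random epochs replace the epilepsy/ASD recordings, and the wavelet and decomposition level are illustrative choices rather than the paper's exact settings.

```python
# DWT sub-band statistics (LBP, std, variance, kurtosis, Shannon entropy) + SVM.
import numpy as np
import pywt
from scipy.stats import kurtosis
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def shannon_entropy(x, bins=32):
    hist, _ = np.histogram(x, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def epoch_features(epoch, wavelet="db4", level=5):
    """Five statistics per DWT sub-band (approximating delta..gamma for a suitable fs)."""
    feats = []
    for band in pywt.wavedec(epoch, wavelet, level=level):
        lbp = np.log10(np.mean(band ** 2) + 1e-12)       # logarithmic band power
        feats += [lbp, np.std(band), np.var(band), kurtosis(band), shannon_entropy(band)]
    return feats

rng = np.random.default_rng(3)
epochs = rng.standard_normal((100, 2560))                # 100 toy single-channel epochs
labels = rng.integers(0, 2, size=100)                    # toy two-class mode

X = np.array([epoch_features(e) for e in epochs])
print(cross_val_score(SVC(kernel="rbf"), X, labels, cv=5).mean())
```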

Open Access Article
Arrhythmia Diagnosis by Using Level-Crossing ECG Sampling and Sub-Bands Features Extraction for Mobile Healthcare
Sensors 2020, 20(8), 2252; https://doi.org/10.3390/s20082252 - 16 Apr 2020
Cited by 1
Abstract
Mobile healthcare is an emerging technique for clinical applications. It is usually based on cloud-connected biomedical implants. In this context, a novel solution is presented for the detection of arrhythmia using electrocardiogram (ECG) signals. The aim is to achieve an effective solution by using real-time compression, efficient signal processing, and data transmission. The system utilizes level-crossing-based ECG signal sampling, adaptive-rate denoising, and wavelet-based sub-band decomposition. Statistical features are extracted from the sub-bands and used for automated arrhythmia classification. The performance of the system was studied by using five classes of arrhythmia obtained from the MIT-BIH dataset. Experimental results showed a three-fold decrease in the number of collected samples compared to conventional counterparts. This resulted in a significant reduction of the computational cost of the subsequent denoising, feature extraction, and classification stages. Moreover, a seven-fold reduction was achieved in the amount of data that needed to be transmitted to the cloud, resulting in a notable reduction in transmitter power consumption, bandwidth usage, and cloud application processing load. Finally, the performance of the system was also assessed in terms of arrhythmia classification, achieving an accuracy of 97%.
(This article belongs to the Special Issue Biomedical Signal Processing for Disease Diagnosis)
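To illustrate the compression idea behind level-crossing sampling, the sketch below emits a sample only when the signal moves to a different amplitude level and reports how many samples result compared with uniform sampling. It is a simplified, quantizer-style approximation of a level-crossing ADC, not the authors' sampling scheme.

```python
# Simplified level-crossing sampling of an ECG-like signal.
import numpy as np

def level_crossing_sample(signal, fs, n_levels=16):
    """Return (times, values) emitted whenever the nearest amplitude level changes."""
    levels = np.linspace(signal.min(), signal.max(), n_levels)
    times, values = [], []
    last_level = np.argmin(np.abs(levels - signal[0]))
    for i in range(1, len(signal)):
        current = np.argmin(np.abs(levels - signal[i]))
        if current != last_level:                        # a level boundary was crossed
            times.append(i / fs)
            values.append(levels[current])
            last_level = current
    return np.array(times), np.array(values)

fs = 360                                                 # MIT-BIH sampling rate
t = np.arange(0, 5, 1 / fs)
rng = np.random.default_rng(4)
ecg_like = np.sin(2 * np.pi * 1.2 * t) + 0.05 * rng.standard_normal(t.size)

times, values = level_crossing_sample(ecg_like, fs)
print(f"{t.size} uniform samples -> {times.size} level-crossing samples")
```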

Open Access Article
Emphasis Learning, Features Repetition in Width Instead of Length to Improve Classification Performance: Case Study—Alzheimer’s Disease Diagnosis
Sensors 2020, 20(3), 941; https://doi.org/10.3390/s20030941 - 10 Feb 2020
Abstract
In the past decade, many studies have been conducted to advance computer-aided systems for Alzheimer’s disease (AD) diagnosis. Most of the recently developed systems concentrate on extracting and combining features from MRI, PET, and CSF data, and for the most part they have obtained very high performance. However, improving the performance of a classification problem is complicated, especially when the model’s accuracy or other performance measures are already higher than 90%. In this study, a novel methodology is proposed to address this problem, specifically in Alzheimer’s disease diagnosis classification. This methodology is the first of its kind in the literature, based on the notion of replication in the feature space instead of the traditional sample space. Briefly, the main steps of the proposed method include extracting, embedding, and exploring the best subset of features. For feature extraction, we adopt VBM-SPM; for embedding, a concatenation strategy is used to ultimately create one feature vector for each subject. Principal component analysis is applied to extract new features, forming a low-dimensional compact space. A novel process is then applied by replicating selected components, assessing the classification model, and repeating the replication until performance diverges or converges. The proposed method aims to explore the most significant features and the highest-performing model at the same time, in order to classify normal subjects from AD and mild cognitive impairment (MCI) patients. In each epoch, a small subset of candidate features is assessed by a support vector machine (SVM) classifier. This procedure is repeated until the highest performance is achieved. Experimental results reveal the highest performance reported in the literature for this specific classification problem. We obtained a model with accuracies of 98.81%, 81.61%, and 81.40% for AD vs. normal control (NC), MCI vs. NC, and AD vs. MCI classification, respectively.
(This article belongs to the Special Issue Biomedical Signal Processing for Disease Diagnosis)
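The "repetition in width" idea can be sketched as follows: principal components are extracted, a chosen subset is repeatedly appended as extra columns, and an SVM is re-scored after each replication until performance stops improving. The data, the choice of replicated components, and the stopping rule below are illustrative assumptions, not the authors' configuration.

```python
# Replicating selected principal components in width and re-scoring an SVM.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(5)
X = rng.standard_normal((120, 300))                      # 120 subjects, 300 raw features
y = rng.integers(0, 2, size=120)                         # toy AD vs. NC labels

Z = PCA(n_components=20, random_state=0).fit_transform(X)
selected = Z[:, :5]                                      # components chosen for replication

best_score, history = -np.inf, []
features = Z.copy()
for step in range(5):                                    # replicate until no improvement
    clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
    score = cross_val_score(clf, features, y, cv=5).mean()
    history.append(score)
    if score <= best_score:
        break
    best_score = score
    features = np.hstack([features, selected])           # widen the feature space

print(history)
```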

Open Access Article
A Novel Approach for Multi-Lead ECG Classification Using DL-CCANet and TL-CCANet
Sensors 2019, 19(14), 3214; https://doi.org/10.3390/s19143214 - 21 Jul 2019
Cited by 3
Abstract
Cardiovascular disease (CVD) has become one of the most serious diseases threatening human health. Over the past decades, more than 150 million people have died of CVDs. Hence, timely prediction of CVDs is especially important. Currently, deep learning-based CVD diagnosis methods are extensively employed; however, most such algorithms can only utilize single-lead ECGs, so the potential information in the other leads is not exploited. To address this issue, we have developed novel methods for diagnosing arrhythmia. In this work, DL-CCANet and TL-CCANet are proposed to extract abstract discriminating features from dual-lead and three-lead ECGs, respectively. Then, a linear support vector machine, which is well suited to high-dimensional features, is used as the classifier. On the MIT-BIH database, a 95.2% overall accuracy is obtained by detecting 15 types of heartbeats using DL-CCANet. On the INCART database, overall accuracies of 94.01% (leads II and V1), 93.90% (leads V1 and V5), and 94.07% (leads II and V5) are achieved by detecting seven types of heartbeat using DL-CCANet, while TL-CCANet yields a higher overall accuracy of 95.52% using all three leads. In addition, all of the above experiments are performed on noisy ECG data. The proposed methods have the potential to be applied in the clinic and on mobile devices.
(This article belongs to the Special Issue Biomedical Signal Processing for Disease Diagnosis)
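The dual-lead fusion idea behind DL-CCANet can be approximated with plain canonical correlation analysis: paired lead segments are projected into a shared space and a linear SVM classifies the concatenated projections. The sketch below is a simplified stand-in for the full DL-CCANet architecture and uses random beats instead of MIT-BIH/INCART data; for brevity, the CCA is fitted on all beats before cross-validation.

```python
# CCA-based fusion of two ECG leads followed by a linear SVM.
import numpy as np
from sklearn.cross_decomposition import CCA
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(6)
n_beats, beat_len = 200, 90                              # toy heartbeat segments
lead_ii = rng.standard_normal((n_beats, beat_len))
lead_v1 = rng.standard_normal((n_beats, beat_len))
labels = rng.integers(0, 5, size=n_beats)                # toy heartbeat classes

cca = CCA(n_components=5)
u, v = cca.fit_transform(lead_ii, lead_v1)               # canonical variates per lead
features = np.hstack([u, v])                             # fuse both leads

clf = LinearSVC(max_iter=5000)
print(cross_val_score(clf, features, labels, cv=5).mean())
```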

Review


Open Access Review
Assessment of Human Visual Acuity Using Visual Evoked Potential: A Review
Sensors 2020, 20(19), 5542; https://doi.org/10.3390/s20195542 - 28 Sep 2020
Abstract
Visual evoked potentials (VEPs) have been used as an alternative method to assess visual acuity objectively, especially in non-verbal infants and in adults with low intellectual abilities or malingering. By sweeping the spatial frequency of the visual stimuli and recording the corresponding VEP, VEP acuity can be estimated by analyzing electroencephalography (EEG) signals. This paper presents a review of the VEP-based visual acuity assessment technique, including a brief overview of the technique; the effects of the visual stimulus parameters; signal acquisition and analysis in the VEP acuity test; and a summary of the technique's current clinical applications. Finally, we discuss the current problems in this research domain and potential future work, which may enable this technique to be adopted more widely and rapidly and may deepen VEP and broader electrophysiology research on the assessment and diagnosis of visual function.
(This article belongs to the Special Issue Biomedical Signal Processing for Disease Diagnosis)
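The sweep-VEP principle reviewed above can be sketched numerically: estimate the EEG amplitude at the stimulus reversal frequency for each swept spatial frequency, then extrapolate the falling amplitude trend to zero to obtain an acuity estimate. Everything below (signals, frequencies, and the linear extrapolation rule) is synthetic and purely illustrative.

```python
# Toy sweep-VEP acuity estimation from single-bin Fourier amplitudes.
import numpy as np

fs, stim_freq = 1000, 7.5                                # Hz; 7.5 reversals/s stimulation
spatial_freqs = np.array([2, 4, 8, 16, 24])              # cycles per degree (cpd)
t = np.arange(0, 2, 1 / fs)
rng = np.random.default_rng(7)

def amplitude_at(freq, epoch, fs):
    """Single-bin Fourier amplitude of the epoch at the given frequency."""
    spectrum = np.fft.rfft(epoch)
    freqs = np.fft.rfftfreq(epoch.size, 1 / fs)
    return np.abs(spectrum[np.argmin(np.abs(freqs - freq))]) / epoch.size

# Synthetic VEPs whose response shrinks as spatial frequency increases.
amps = []
for sf in spatial_freqs:
    vep = max(0.0, 1 - sf / 30) * np.sin(2 * np.pi * stim_freq * t)
    epoch = vep + 0.1 * rng.standard_normal(t.size)
    amps.append(amplitude_at(stim_freq, epoch, fs))

# Linear extrapolation of the amplitude-vs-spatial-frequency trend to zero amplitude.
slope, intercept = np.polyfit(spatial_freqs, amps, 1)
acuity_estimate = -intercept / slope
print(f"estimated acuity ~ {acuity_estimate:.1f} cpd")
```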
