Article

Detecting Respiratory Pathologies Using Convolutional Neural Networks and Variational Autoencoders for Unbalancing Data

1 SECOMUCI Research Groups, Escuela de Ingenierías Industrial e Informática, Universidad de León, Campus de Vegazana s/n, C.P. 24071 León, Spain
2 SALBIS Research Group, Department of Electric, Systems and Automatics Engineering, Universidad de León, Campus de Vegazana s/n, 24071 León, Spain
* Author to whom correspondence should be addressed.
Sensors 2020, 20(4), 1214; https://doi.org/10.3390/s20041214
Received: 20 January 2020 / Revised: 16 February 2020 / Accepted: 21 February 2020 / Published: 22 February 2020
(This article belongs to the Special Issue Sensor and Systems Evaluation for Telemedicine and eHealth)

Abstract

The aim of this paper was the detection of pathologies through respiratory sounds. The ICBHI (International Conference on Biomedical and Health Informatics) Benchmark was used. This dataset is composed of 920 sounds, of which 810 correspond to chronic diseases, 75 to non-chronic diseases, and only 35 to healthy individuals. As more than 88% of the samples belong to the same class (Chronic), the dataset classes are unbalanced, so the use of a Variational Convolutional Autoencoder was proposed to generate new labeled data, alongside other well-known oversampling techniques. Once the preprocessing step was carried out, a Convolutional Neural Network (CNN) was used to classify the respiratory sounds as healthy, chronic, or non-chronic disease. In addition, we carried out a more challenging classification, trying to distinguish between the different types of pathology or a healthy condition: URTI, COPD, bronchiectasis, pneumonia, and bronchiolitis. We achieved results of up to 0.993 F-Score in the three-label classification and 0.990 F-Score in the more challenging six-class classification.
Keywords: CNN; variational autoencoder; respiratory; lungs; pathologies

1. Introduction

Today, respiratory pathologies are a common problem all over the world. Although smoking is the most common cause of respiratory pathologies, sometimes they are caused by genetics, as well as environmental exposure [1]. The ICBHI (International Conference on Biomedical and Health Informatics) respiratory dataset [2] covers seven pathologies: chronic obstructive pulmonary disease (COPD), asthma, upper respiratory tract infection (URTI), lower respiratory tract infection (LRTI), bronchiectasis, pneumonia, and bronchiolitis.
Chronic obstructive pulmonary disease (COPD) is a chronic pathology which is difficult to detect. The main cause of COPD is smoking [3]. It causes symptoms, including shortness of breath and cough, which are also common in asthma. Furthermore, these symptoms can be interpreted as a simple aging process.
Bronchiectasis is a chronic condition in which the airways of the lungs become abnormally widened. These damaged air passages allow bacteria and mucus to build up and pool in the lungs. This results in frequent infections and blockages of the airways. All these symptoms can also be mistaken for bronchiolitis or just a cold [4]. The main difference between the two diseases is that bronchiolitis most often affects young children and can be cured, whereas bronchiectasis is a chronic disease.
Upper respiratory tract infection is a non-chronic disease that can happen at any time, but it is more common in the fall and winter. The vast majority of upper respiratory infections are caused by viruses [5]. The symptoms of this disease can be confused with those of pneumonia [1]. Most people with pneumonia recover in a short time, but for certain people, it can be extremely serious and even life-threatening, so diagnosis is crucial.
Symptoms of lower respiratory tract infections (LRTI) vary and depend on the severity of the infection. Less severe infections can have symptoms similar to those of bronchiectasis or bronchiolitis.
As we can see, the symptoms of all these diseases largely overlap and can lead to misdiagnosis. For this reason, it is very valuable to be able to determine the disease using the sound of the breathing alone, without relying on the rest of the symptoms.

2. Related Work

2.1. Respiratory Sounds Detection

Lung auscultation provides valuable information regarding the respiratory function of the patient, and it is important to analyze respiratory sounds with an algorithm to give support to medical doctors. There are a few methods in the literature that deal with this challenge. Typically, wheezing is found in asthma and chronic obstructive lung diseases. Wheezes can be so loud that they can be heard just by standing next to the patient. Crackles, on the other hand, are only heard using a stethoscope, and they are a sign of too much fluid in the lung. Crackles and wheezes are indications of pathology.
Islam et al. [6] detected asthma based on the fact that asthma detection from lung sound signals relies on the presence of wheeze. They collected lung sounds from 60 subjects, 50% of whom had asthma, using a data acquisition system at four different positions on the back of the chest. For the classification step, ANN (Artificial Neural Network) and SVM (Support Vector Machine) classifiers were used, with the best results (93.3%) obtained in the SVM scenario.
Other studies based on the detection of wheezes and crackles [7] used different configurations of a neural network, obtaining results of up to 93% for detecting crackles and 91.7% for wheezes. The same goal is pursued in Reference [8], but the dataset used in this case consists of seven classes: normal, coarse crackle, fine crackle, monophonic wheeze, polyphonic wheeze, squawk, and stridor. The best results were achieved using a Convolutional Neural Network (CNN). Chen et al. [9] proposed a ResNet with an OST-based (Optimized S-Transform based) feature map to classify wheeze, crackle, and normal sounds. In detail, three RGB (Red-Green-Blue) maps of the rescaled feature map are fed into the ResNet, chosen for its balance between depth and performance. The input feature map is passed through the three stages of the ResNet structure, and finally, the output corresponds to the class (wheeze, crackle, or normal). The results are compared with ResNet-STFT (Short Term Fourier Transform) and ResNet-ST (S-Transform), with the best accuracy achieved using their proposed ResNet-OST.
In Reference [10], the authors propose a methodology to classify respiratory sounds into wheezes, crackles, both wheezes and crackles, and normal, using the same dataset as that used in our work: ICBHI [2]. The procedure consists of a noise suppression step using spectral subtraction, followed by a feature extraction process. Hidden Markov Models were used in the classification step, obtaining 39.56% on the score metric, defined as the average of sensitivity and specificity. These results are not promising, but in Reference [11], Perna et al. proposed a reliable method to classify into healthy, chronic disease, or non-chronic disease based on wheezes, crackles, or normal sounds, using deep learning (more concretely, recurrent neural networks) and, again, the ICBHI benchmark [2].
In Reference [12], Jácome et al. also proposed a CNN to deal with respiratory sounds for detecting the breathing phase, with a 97% success rate in inspiration detection and 87% in expiration.
Early models of RNNs suffered from both exploding and vanishing gradient problems. Long Short Term Memory (LSTM) and Gated Recurrent Unit (GRU) were designed to address the gradient problems successfully. The authors exploited the LSTM and GRU advantages and obtained promising results of up to 91% of the ICBHI Score calculated as the average value of sensitivity and specificity [11].
Deep learning techniques have also been used to detect some kinds of pathologies, such as bronchiolitis, URTI, pneumonia, etc., which is a more challenging problem than classifying wheezes and crackles. In Reference [13], the authors try to distinguish between pathological and non-pathological voices over the Saarbrücken Voice Database (SVD) using the MultiFocal toolkit for discriminative calibration and fusion. The authors carry out a feature extraction step, and these features (Mel-frequency cepstral coefficients, harmonics-to-noise ratio, normalized noise energy, and glottal-to-noise excitation ratio) are used to train a generative Gaussian Mixture Model (GMM) [14].

2.2. Deep Learning Techniques

In the literature, many deep learning techniques have been used to solve all kinds of problems, which gives an idea of how useful artificial intelligence is.
Today, some of the most used techniques for all kinds of purposes are autoencoders and CNNs. Sugimoto et al. [15] try to detect myocardial infarction from ECG (electrocardiogram) data using a CNN. In their experiments, the classification performance was evaluated using 353,640 beats obtained from the ECG data of MI (myocardial infarction) patients and healthy subjects. The ECG data were extremely imbalanced, and the minority class, containing the abnormal ECG data, might not be learned adequately. To solve this problem, the authors proposed to use a convolutional autoencoder in the following way: a CAE model is constructed for each lead, and it outputs a faithful reconstruction when normal ECG data are input; otherwise, the output waveform is distorted. After this process, k-Nearest Neighbors (kNN) is used as a classifier.
A CAE (Convolutional autoencoder) is also used in Reference [16] to restore the corrupted laser stripe images of the depth sensor by denoising the data.
In Reference [17], Kao et al. propose a method of classifying Lycopersicons based on three levels of maturity (immature, semi-mature, and mature). Their method includes two artificial neural networks, a convolutional autoencoder (CAE), and a backpropagation neural network. With the first one, the ROI in the Lycopersicon is detected (instead of doing it manually). Then, using the extracted features, the neural network employs self-learning mechanisms to determine Lycopersicon maturity obtaining an accuracy rate of 100%.
A variational autoencoder is used in Reference [18] for video anomaly detection and localization using only normal samples. The method is based on Gaussian Mixture Variational Autoencoder, which can learn the feature representations of the normal samples as a Gaussian Mixture Model trained using deep learning. A Fully Convolutional Network (FCN) is employed for the encoder-decoder structure to preserve relative spatial coordinates between the input image and the output feature map.
In Reference [19], a non-linear surrogate model based on deep learning is proposed using a variational autoencoder with deep convolutional layers and a deep neural network with batch normalization (VAEDC-DNN) for a real-time analysis of the probability of death in toxic gas scenarios.
Advances in indoor positioning technologies can generate large volumes of spatial trajectory data on the occupants. These data can reveal the distribution of the occupants. In Reference [20], the authors propose a method of evaluating similarities in occupant trajectory data using a convolutional autoencoder (CAE).
In Reference [21], deep autoencoders are proposed to produce efficient bimodal features from the audio and visual stream inputs. The authors obtained an average relative reduction of 36.9% for a range of different noisy conditions, and also, a relative reduction of 19.2% for the clean condition in terms of the Phoneme Error Rates (PER) in comparison with the baseline method.
In 2019, variational autoencoders were widely used to analyze and monitor different kinds of signals [22,23]. In addition, Zemouri et al. [24] used variational autoencoders to train a model as a 2D visualization tool for partial discharge source classification.
So, today, autoencoders and CNNs are widely used in the literature to solve all kinds of problems. We take advantage of both methods and propose the use of a variational convolutional autoencoder to balance the data, as well as a CNN to carry out the classification step.
In this paper, we propose a technique to classify respiratory sounds into healthy, chronic disease, and non-chronic disease, and also into six different pathology classes: chronic obstructive pulmonary disease (COPD), upper respiratory tract infection (URTI), bronchiectasis, pneumonia, bronchiolitis, and healthy. Our procedure outperforms the state-of-the-art proposals.
The rest of the paper is organized as follows: In Section 3, we describe the methodology, including data preprocessing, data normalization, data augmentation using our Variational Convolutional Autoencoder, and finally, data classification using a CNN. Experiments and results are detailed in Section 4, taking into account two types of classification, and finally, we conclude in Section 5.

3. Methods and Materials

3.1. Data Normalization

Data normalization is an important step before carrying out any machine learning strategy. There are multiple alternatives for normalizing data. In this paper, we evaluated our data with Min-Max normalization, which scales the data to the [0,1] range.
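A minimal sketch of Min-Max normalization (our own helper function, not code from the paper):

```python
import numpy as np

def min_max_normalize(x):
    """Scale an array to the [0, 1] range (Min-Max normalization)."""
    x = np.asarray(x, dtype=float)
    x_min, x_max = x.min(), x.max()
    if x_max == x_min:  # avoid division by zero on constant input
        return np.zeros_like(x)
    return (x - x_min) / (x_max - x_min)
```

In practice, the minimum and maximum are computed on the training set and reused for test data, so that no test information leaks into preprocessing.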

3.2. Data Augmentation: Variational AutoEncoder (VAE)

In all fields of research, but more frequently in eHealth, it is very common to have unbalanced datasets. That means that the number of elements (the cardinality) of one class is much bigger than the cardinalities of all the rest of the classes. To solve this, there are multiple techniques which try to replicate samples of the minority classes. Synthetic Minority Oversampling Technique (SMOTE) [25], Adaptive Synthetic Sampling Method (ADASYN) [26], and Variational Autoencoders (VAE) [27] are some examples of generative methods. We evaluated our dataset with all of these oversampling methods, obtaining the best results with the VAE, as shown in the results section.
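The interpolation idea behind SMOTE can be sketched in a few lines of NumPy: a synthetic minority sample is placed on the segment between a minority sample and one of its k nearest minority neighbours. This is a simplified illustration of the idea, not the full algorithm (libraries such as imbalanced-learn provide complete implementations), and the function name and parameters are our own:

```python
import numpy as np

def smote_like_oversample(X_minority, n_new, k=3, rng=None):
    """Create n_new synthetic samples by interpolating between a random
    minority sample and one of its k nearest minority neighbours."""
    rng = np.random.default_rng(rng)
    X = np.asarray(X_minority, dtype=float)
    new_samples = []
    for _ in range(n_new):
        i = rng.integers(len(X))
        # distances from sample i to every other minority sample
        d = np.linalg.norm(X - X[i], axis=1)
        d[i] = np.inf                      # exclude the sample itself
        neighbours = np.argsort(d)[:k]
        j = rng.choice(neighbours)
        gap = rng.random()                 # interpolation factor in [0, 1)
        new_samples.append(X[i] + gap * (X[j] - X[i]))
    return np.array(new_samples)
```

Every synthetic point lies between two real minority samples, so the oversampled class stays inside its original region of the feature space.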
VAEs are part of a kind of neural network known as autoencoders. The vanilla autoencoder architecture consists of a number of dense hidden layers with two main peculiarities:
  • One of these hidden layers has very few neurons (the latent space).
  • The output of the vanilla autoencoder tries to replicate the input.
Taking into account these two considerations, when an autoencoder is trained, the net learns to encode the data information in its latent space and then decode it to reconstruct the original data.
However, a VAE is a probabilistic model focused on learning the distribution of the data so as to be able to create new samples belonging to that distribution. Whereas vanilla autoencoders only try to reconstruct the original data, a VAE is also trained to learn the distribution of the data. For that reason, the loss function used to train a VAE is made up of two terms: a “reconstruction term”, as in the vanilla autoencoder, that tends to make the encoder-decoder work accurately; and a “regularization term”, applied over the latent layer, that tends to make the distributions created by the encoder close to a standard normal distribution using the Kullback-Leibler divergence (see Equation (1)).
$VAE\_loss = \| x - \bar{x} \|^2 + KL\left[N(\mu_x, \sigma_x),\, N(0, 1)\right]$,
where $\bar{x}$ is the reconstruction of $x$, $N(\mu_x, \sigma_x)$ is a normal distribution with mean $\mu_x$ and standard deviation $\sigma_x$, and $KL[p, q]$ is the Kullback-Leibler divergence defined in Equation (2):
$KL[p, q] = -\int p(x) \log q(x)\,dx + \int p(x) \log p(x)\,dx$.
In Figure 1a,b, respectively, we show the differences between a variational autoencoder and a vanilla one.
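When the encoder outputs a diagonal Gaussian, the regularization term of Equation (1) has a well-known closed form, $KL = \frac{1}{2}\sum(\sigma^2 + \mu^2 - 1 - \log\sigma^2)$. A minimal NumPy sketch of the loss, with hypothetical argument names and assuming the encoder returns the mean and log-variance of the latent distribution:

```python
import numpy as np

def vae_loss(x, x_recon, mu, log_var):
    """VAE loss as in Equation (1): squared reconstruction error plus
    the closed-form KL divergence between N(mu, sigma) and N(0, 1)."""
    recon = np.sum((x - x_recon) ** 2)
    # KL[N(mu, sigma) || N(0, 1)] for a diagonal Gaussian latent
    kl = 0.5 * np.sum(np.exp(log_var) + mu ** 2 - 1.0 - log_var)
    return recon + kl
```

The loss is zero exactly when the reconstruction is perfect and the latent distribution is the standard normal; deep learning frameworks compute the same expression on batches of tensors during training.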

3.3. Data Classification: Convolutional Neural Networks

A CNN is a Deep Learning algorithm which can take a bi-dimensional input and differentiate it from others by automatically learning filters which extract complex features from the inputs. A basic model of a CNN is represented in Figure 2.
During the training step, each convolution layer learns its filter weights and then produces a feature map. The filter, or kernel, slides over the input, and the sum of the convolution generates the feature map.
After a convolution layer, it is common to add a pooling layer. These kinds of layers are used to decrease the number of parameters in the network, which reduces the computational cost and controls overfitting. The most frequent type of pooling is max-pooling, which takes the maximum value in each window. In order to carry out a classification or regression with the features generated by the convolutional layers, it is necessary to add dense layers at the end of the network.
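The two operations described above can be illustrated with a small NumPy sketch (for illustration only; real CNN frameworks implement these far more efficiently and learn the kernel weights during training):

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D convolution (cross-correlation, as used in CNNs):
    slide the kernel over the input and sum elementwise products,
    producing a feature map."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for r in range(out.shape[0]):
        for c in range(out.shape[1]):
            out[r, c] = np.sum(image[r:r + kh, c:c + kw] * kernel)
    return out

def max_pool(fmap, size=2):
    """Non-overlapping max-pooling: keep the maximum of each window,
    shrinking the feature map and the parameter count downstream."""
    h, w = fmap.shape[0] // size, fmap.shape[1] // size
    return fmap[:h * size, :w * size].reshape(h, size, w, size).max(axis=(1, 3))
```

A 2x2 max-pooling halves each spatial dimension, which is why stacking convolution and pooling layers progressively condenses the input into compact features for the final dense layers.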

4. Experiments and Results

4.1. Dataset

The ICBHI (International Conference on Biomedical and Health Informatics) dataset [2] was created by two research teams (in Greece and Portugal), and it includes 920 recordings acquired from 126 subjects. A total of 6898 respiration cycles and 5.5 h of sound were recorded. Of these 6898 respiration cycles, 1864 were labeled as crackles, 886 contain wheezes, and 506 contain both crackles and wheezes. Crackles and wheezes were labeled by experts in the field. Respiratory sounds were recorded from seven different chest locations: trachea, left and right anterior, left and right posterior, and left and right lateral (see Figure 3). High noise levels were included to simulate real situations, resulting in a challenging dataset.
Respiratory sounds were recorded from patients with chronic obstructive pulmonary disease (COPD), asthma, upper respiratory tract infection (URTI), lower respiratory tract infection (LRTI), bronchiectasis, pneumonia, and bronchiolitis.

4.1.1. Image Generation

In this paper, we deal with audios through their Mel spectrograms. A Mel spectrogram is a visual representation of the spectrum of a sound on the Mel scale. The Mel scale [28], proposed by Stevens et al., is a perceptual scale of equally spaced pitches. The conversion of hertz into Mels is done using Equation (3):
$m = 2595 \times \log_{10}\left(1 + \frac{f}{700}\right)$,
where $f$ is the frequency in hertz.
There are four steps to obtain the Mel spectrogram of an audio input:
  • Sample the input wave with windows of a fixed size and step.
  • Compute the Fast Fourier Transform to move the data to the frequency domain.
  • Generate bins using the Mel scale.
  • Generate the spectrogram by breaking down the magnitude of the signal into the frequencies of the Mel scale.
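The hertz-to-Mel conversion of Equation (3) and the first two steps above can be sketched as follows (a simplified illustration; in practice, libraries such as librosa compute full Mel spectrograms, including the Mel filter bank of steps 3 and 4, directly):

```python
import numpy as np

def hz_to_mel(f):
    """Equation (3): convert a frequency in hertz to the Mel scale."""
    return 2595.0 * np.log10(1.0 + f / 700.0)

def spectrogram(wave, win_size=256, step=128):
    """Steps 1-2: sample the wave with fixed-size, fixed-step windows
    and take the FFT magnitude of each one (frames x frequency bins)."""
    frames = [wave[i:i + win_size]
              for i in range(0, len(wave) - win_size + 1, step)]
    return np.abs(np.fft.rfft(frames, axis=1))
```

The window size and step used here are arbitrary illustrative values; the paper does not report its exact framing parameters.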
Once all the spectrograms were built, the images were resized to have the same number of columns. Each column represents a unit of time, so it is very common to have different widths throughout the dataset. In our case, all the images were resized to the mean number of columns over all the spectrograms (see Equation (4)):
$mean\_columns = \frac{\sum_{i=1}^{N} cols(dataset(i))}{N}$,
where $N$ is the number of spectrograms in the experiment and $cols(x)$ is the number of columns of image $x$. Some of these images can be seen in Figure 4.
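A possible sketch of this column-equalization step. The mean follows Equation (4); the nearest-neighbour resampling of the time axis is our own assumption, since the paper does not specify the interpolation method used for resizing:

```python
import numpy as np

def mean_columns(specs):
    """Equation (4): mean number of columns over all spectrograms."""
    return int(round(sum(s.shape[1] for s in specs) / len(specs)))

def resize_columns(spec, target_cols):
    """Resample a spectrogram's time axis (columns) to a fixed width
    by nearest-neighbour column indexing."""
    idx = np.round(np.linspace(0, spec.shape[1] - 1, target_cols)).astype(int)
    return spec[:, idx]
```

Equalizing the width this way lets every Mel spectrogram enter the CNN as a fixed-size image.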

4.2. Chronic Classification

4.2.1. Experimental Setup

In this work, Min-Max feature normalization was carried out to set our data in the range [0,1], which greatly improves the performance of neural network training. After that, we evaluated the class distribution of the dataset, taking into account three different values: Chronic, Non-Chronic, and Healthy. In Table 1, we can see the unbalanced distribution.
As we can see, the samples of chronic pathologies represent 88.04% of the whole dataset. In our experiments, we carried out a classification on the unbalanced dataset with and without using class weights. Furthermore, an augmentation of the less represented classes was done to balance the dataset. This augmentation step was carried out using our proposed VAE.
A convolutional VAE scheme has been implemented in order to generate more samples for the Non-Chronic and healthy classes. In Figure 5, we can see the network configuration.
In Table 2, the new size of each class is shown.
Some examples of the new images generated can be seen in Figure 6.
Once our dataset was well balanced, we designed a CNN for the three-class classification. The scheme of this network can be seen in Table 4. As we can see, we added some layers, such as BatchNormalization and Dropout, to avoid overfitting. The output layer consists of three neurons to fit the three-class classification.
We used Adam as the optimization algorithm and categorical crossentropy as the loss function. Before training, a train-test split (80-20) was carried out in order to clearly distinguish between the data used for training and the data used to evaluate the classifier.
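The 80-20 split can be sketched with a small hypothetical helper (in practice a library routine such as scikit-learn's train_test_split is commonly used; this NumPy version only illustrates the idea of shuffling and holding out 20% for evaluation):

```python
import numpy as np

def train_test_split_80_20(X, y, rng=0):
    """Shuffle the sample indices and reserve the last 20% strictly
    for evaluation, so the test data never influence training."""
    rng = np.random.default_rng(rng)
    idx = rng.permutation(len(X))
    cut = int(0.8 * len(X))
    train, test = idx[:cut], idx[cut:]
    return X[train], X[test], y[train], y[test]
```

Note that, as described in the next paragraph, the experiments repeat this splitting ten times (10-fold cross-validation) and report mean metrics, rather than relying on a single partition.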
In order to avoid random factors, 10-fold cross-validation was carried out in all the experiments, and all the tables report the mean value of the metrics. The intermediate results for our proposal are shown in Table 3.
As we can see, the standard deviation is very small, which indicates good generalization of the method across different dataset splits.

4.2.2. Chronic Classification Results

We carried out five different classifications using the CNN scheme shown in Table 4.
The first two classifications were made without a data augmentation process, which led to very unbalanced training. In the first experiment, we trained on the data without modifications, while for the second, we calculated training weights for each class based on its number of elements. The rest of the classifications were made by adding to the dataset the new elements generated with SMOTE, ADASYN, and our convolutional VAE network.
We used the Sensitivity (Recall), Specificity, and Score metrics, defined in the same way as the authors did in Reference [11]:
$Sensitivity = \frac{C_{chronic} + C_{nonchronic}}{N_{chronic} + N_{nonchronic}}$,
$Specificity = \frac{C_{healthy}}{N_{healthy}}$,
$Score = \frac{Sensitivity + Specificity}{2}$,
where $C$ represents the correctly recognized samples of the specified class, and $N$ represents the total number of samples of that class.
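The definitions above can be computed directly from label vectors. This is a sketch with an assumed integer label encoding (one value marks the healthy class, the rest mark disease classes), not code from the paper:

```python
import numpy as np

def icbhi_metrics(y_true, y_pred, healthy_label=0):
    """Sensitivity over the disease classes, specificity over the
    healthy class, and their average (the ICBHI Score)."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    sick = y_true != healthy_label
    healthy = ~sick
    sensitivity = np.mean(y_pred[sick] == y_true[sick])
    specificity = np.mean(y_pred[healthy] == y_true[healthy])
    return sensitivity, specificity, (sensitivity + specificity) / 2
```

Because specificity is computed only over healthy samples, a classifier that always predicts the majority (chronic) class scores 0 on it, which is exactly the failure mode visible in the unbalanced experiments below.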
In Table 5, we can see the metrics obtained in the five experiments on the chronic classification. Furthermore, well-known metrics, such as precision, recall, and F-Score, have been calculated.
The dataset oversampled using the VAE achieved the best results on all the metrics. Whereas sensitivity has very high values in all cases, specificity, precision, and recall show very poor performance in all the other cases due to the misclassification of the healthy class. It is also important to note the high classification of healthy individuals according to the precision score. This is very important due to the high risk of classifying a non-chronic or, even worse, a chronic disease as healthy.
In Figure 7, the confusion matrices obtained for the unbalanced classifications and the best oversampling technique are shown.
As we can see, in all the experiments, the Chronic class obtains the best results. However, the unbalanced experiments show very poor performance on the healthy class, obtaining 0% Specificity. Our proposal with the balanced dataset demonstrates that adding new synthetic Mel spectrograms created with a convolutional VAE yields very good classification for all the classes.
Furthermore, in Figure 8, we can see a comparison between our proposal and the methods found in the state-of-the-art on the ICBHI dataset. The results show that our method outperforms all of the papers in the state-of-the-art.

4.3. Pathology Classification

4.3.1. Experimental Setup

As in the chronic classification, the first step we carried out was normalization using the min-max scaler technique. A study of the distribution of the pathology classes was done, showing that 86.20% of the samples belong to COPD (see Table 6).
LRTI and asthma have just two samples and one sample, respectively, so we decided to exclude them from our classification. The same data augmentation algorithm was applied to increase the number of samples of all the diseases except COPD. In the end, our dataset comprised a total of 4874 spectrograms. We used the same CNN as in Section 4.2.1, except for the output layer, which, in this case, has six neurons, one for each class. All the training parameters were also the same.
In order to avoid random factors, 10-fold cross-validation was carried out in all the experiments, and all the tables report the mean value of the metrics. The intermediate results for our proposal are shown in Table 7.
As we can see, the standard deviation is also very small on the pathologies dataset, which indicates good generalization of the method across different dataset splits.

4.3.2. Pathology Classification Results

As in the chronic classification, we carried out five different classifications using the CNN scheme shown in Table 4 for the unbalanced, weighted, and balanced datasets. For the classification of the pathologies, the more challenging task, the same metrics were defined in the following way:
$Sensitivity = \frac{C_{URTI} + C_{COPD} + C_{Bronchiectasis} + C_{Pneumonia} + C_{Bronchiolitis}}{N_{healthy} + N_{nonhealthy}}$,
$Specificity = \frac{C_{healthy}}{N_{healthy}}$,
$Score = \frac{Sensitivity + Specificity}{2}$.
In Table 8, the aforementioned metrics were calculated for each experiment.
As we can see in the table, the behavior is exactly the same as in the chronic/non-chronic detection. The dataset augmented using our proposed VAE architecture outperforms all the other augmentation techniques, and even the raw dataset, on all the metrics. In comparison with the ternary classification, in this experiment only the VAE-augmented dataset achieved optimal results according to Sensitivity. Taking into account the F-Score, which is one of the most reliable metrics, our proposal outperforms the second-best method (ADASYN augmentation) by more than 72%.
In Figure 9, the three confusion matrices obtained for the two unbalanced datasets and the best balanced dataset are shown.

5. Conclusions and Future Work

In this article, a new procedure has been proposed to detect respiratory pathologies. In the analysis of medical data, it is very common to have very unbalanced datasets. In our work, we proposed a convolutional variational autoencoder to augment the rare classes. We transformed all respiratory audios into Mel spectrograms in order to work with convolutional networks. These types of networks learn very fast on GPUs and can learn the most relevant characteristics of the analyzed images by themselves, without the need for a feature description step. We carried out two different classifications: one for detecting chronic, non-chronic, and healthy breathing, and the other for distinguishing these pathologies from each other. In the first experiment, the results showed a sensitivity of 0.991 and a specificity of 0.994, outperforming all the studies in the state-of-the-art. Furthermore, a new and more challenging experiment with five pathology classes (plus the healthy one) was carried out with the same CNN, obtaining promising results: 0.988 in the sensitivity metric and 0.986 in specificity.
With these results, we can conclude that, using Mel spectrograms and CNNs, pathologies can easily be classified from breathing sounds, even when the training dataset is unbalanced, by using convolutional variational autoencoders to augment the classes with fewer samples.
For future work, we will keep working on the idea of using CNNs to deal with variable-length audios. The ICBHI dataset samples have similar audio lengths, and it would be interesting to be able to train and predict on our data without cropping or resizing the spectrograms, no matter how long the audio is.
In addition, it would be desirable to be able to detect which parts of the spectrogram indicate the disease, in order to identify new symptoms understandable by specialists, or even to find multiple diseases in the same sample at different times.

Author Contributions

Conceptualization, M.T.G.-O. and J.A.B.-A.; Formal analysis, M.T.G.-O. and I.G.-R.; Investigation, M.T.G.-O., I.G.-R. and C.B.; Methodology, M.T.G.-O. and H.A.-M.; Software, M.T.G.-O. and J.A.B.-A.; Validation, M.T.G.-O., I.G.-R. and C.B.; Writing—original draft, M.T.G.-O.; Writing—review & editing, J.A.B.-A., I.G.-R., C.B. and H.A.-M. All authors have read and agreed to the published version of the manuscript.

Funding

This study was funded by the University of León.

Acknowledgments

We gratefully acknowledge the support provided by the Consejería de Educación, Junta de Castilla y León through project LE078G18. UXXI2018/000149. U-220.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Gibson, G.J.; Loddenkemper, R.; Lundbäck, B.; Sibille, Y. Respiratory health and disease in Europe: The new European Lung White Book. Eur. Respir. J. 2013, 42, 559–563. [Google Scholar] [CrossRef]
  2. Rocha, B.M.; Filos, D.; Mendes, L.; Vogiatzis, I.; Perantoni, E.; Kaimakamis, E.; Natsiavas, P.; Oliveira, A.; Jácome, C.; Marques, A.; et al. A respiratory sound database for the development of automated classification. In Proceedings of the International Conference on Biomedical and Health Informatics, Thessaloniki, Greece, 18–21 November 2017. [Google Scholar]
3. Thun, M.J.; Carter, B.D.; Feskanich, D.; Freedman, N.D.; Prentice, R.; Lopez, A.D.; Hartge, P.; Gapstur, S.M. 50-Year Trends in Smoking-Related Mortality in the United States. N. Engl. J. Med. 2013, 368, 351–364.
4. Zorc, J.J.; Hall, C.B. Bronchiolitis: Recent evidence on diagnosis and management. Pediatrics 2010, 125, 342–349.
5. Lema, G.F.; Berhe, Y.W.; Gebrezgi, A.H.; Getu, A.A. Evidence-based perioperative management of a child with upper respiratory tract infections (URTIs) undergoing elective surgery; A systematic review. Int. J. Surg. Open 2018, 12, 17–24.
6. Islam, M.A.; Bandyopadhyaya, I.; Bhattacharyya, P.; Saha, G. Multichannel lung sound analysis for asthma detection. Comput. Meth. Programs Biomed. 2018, 159, 111–123.
7. Güler, I.; Polat, H.; Ergün, U. Combining neural network and genetic algorithm for prediction of lung sounds. J. Med. Syst. 2005, 29, 217–231.
8. Bardou, D.; Zhang, K.; Ahmad, S.M. Lung sounds classification using convolutional neural networks. Artif. Intell. Med. 2018, 88, 58–69.
9. Chen, H.; Yuan, X.; Pei, Z.; Li, M.; Li, J. Triple-Classification of Respiratory Sounds Using Optimized S-Transform and Deep Residual Networks. IEEE Access 2019, 7, 32845–32852.
10. Jakovljević, N.; Lončar-Turukalo, T. Hidden Markov model based respiratory sound classification. In Proceedings of the International Conference on Biomedical and Health Informatics, Thessaloniki, Greece, 18–21 November 2017.
11. Perna, D.; Tagarelli, A. Deep Auscultation: Predicting Respiratory Anomalies and Diseases via Recurrent Neural Networks. In Proceedings of the 2019 IEEE 32nd International Symposium on Computer-Based Medical Systems (CBMS), Cordoba, Spain, 5–7 June 2019.
12. Jácome, C.; Ravn, J.; Holsbø, E.; Aviles-Solis, J.C.; Melbye, H.; Ailo Bongo, L. Convolutional Neural Network for Breathing Phase Detection in Lung Sounds. Sensors 2019, 19, 1798.
13. Martínez, D.; Lleida, E.; Ortega, A.; Miguel, A.; Villalba, J. Voice pathology detection on the Saarbrücken Voice Database with calibration and fusion of scores using multifocal toolkit. In Proceedings of the Advances in Speech and Language Technologies for Iberian Languages, Madrid, Spain, 21–23 November 2012.
14. Reynolds, D.A.; Rose, R.C. Robust Text-Independent Speaker Identification Using Gaussian Mixture Speaker Models. IEEE Trans. Speech Audio Process. 1995, 3, 72–83.
15. Sugimoto, K.; Kon, Y.; Lee, S.; Okada, Y. Detection and localization of myocardial infarction based on a convolutional autoencoder. Knowl.-Based Syst. 2019, 178, 123–131.
16. Fang, Z.; Jia, T.; Chen, Q.; Xu, M.; Yuan, X.; Wu, C. Laser stripe image denoising using convolutional autoencoder. Results Phys. 2018, 11, 96–104.
17. Kao, I.H.; Hsu, Y.W.; Yang, Y.Z.; Chen, Y.L.; Lai, Y.H.; Perng, J.W. Determination of Lycopersicon maturity using convolutional autoencoders. Sci. Hortic. 2019, 256, 108538.
18. Fan, Y.; Wen, G.; Li, D.; Qiu, S.; Levine, M.D. Video Anomaly Detection and Localization via Gaussian Mixture Fully Convolutional Variational Autoencoder. arXiv 2018, arXiv:1805.11223.
19. Na, J.; Jeon, K.; Lee, W.B. Toxic gas release modeling for real-time analysis using variational autoencoder with convolutional neural networks. Chem. Eng. Sci. 2018, 181, 68–78.
20. Li, L.; Li, X.; Yang, Y.; Dong, J. Indoor tracking trajectory data similarity analysis with a deep convolutional autoencoder. Sustain. Cities Soc. 2019, 45, 588–595.
21. Rahmani, M.H.; Almasganj, F.; Seyyedsalehi, S.A. Audio-visual feature fusion via deep neural networks for automatic speech recognition. Digit. Signal Process. A Rev. J. 2018, 82, 54–63.
22. Lee, S.; Kwak, M.; Tsui, K.L.; Kim, S.B. Process monitoring using variational autoencoder for high-dimensional nonlinear processes. Eng. Appl. Artif. Intell. 2019, 83, 13–27.
23. Wang, K.; Forbes, M.G.; Gopaluni, B.; Chen, J.; Song, Z. Systematic Development of a New Variational Autoencoder Model Based on Uncertain Data for Monitoring Nonlinear Processes. IEEE Access 2019, 7, 22554–22565.
24. Zemouri, R.; Levesque, M.; Amyot, N.; Hudon, C.; Kokoko, O.; Tahan, S.A. Deep Convolutional Variational Autoencoder as a 2D-Visualization Tool for Partial Discharge Source Classification in Hydrogenerators. IEEE Access 2019, 8, 5438–5454.
25. Chawla, N.V.; Bowyer, K.W.; Hall, L.O.; Kegelmeyer, W.P. SMOTE: Synthetic minority over-sampling technique. J. Artif. Intell. Res. 2002, 16, 321–357.
26. He, H.; Bai, Y.; Garcia, E.A.; Li, S. ADASYN: Adaptive synthetic sampling approach for imbalanced learning. In Proceedings of the International Joint Conference on Neural Networks, Hong Kong, China, 1–8 June 2008; pp. 1322–1328.
27. Kingma, D.P.; Welling, M. Stochastic Gradient VB and the Variational Auto-Encoder. Available online: https://pdfs.semanticscholar.org/eaa6/bf5334bc647153518d0205dca2f73aea971e.pdf (accessed on 22 February 2020).
28. Stevens, S.S.; Volkmann, J.; Newman, E.B. A Scale for the Measurement of the Psychological Magnitude Pitch. J. Acoust. Soc. Am. 1937, 8, 185–190.
Figure 1. In (a), a Variational AutoEncoder (VAE) scheme with the mean and standard deviation layers used to sample the latent vector. In (b), the vanilla autoencoder with a simple latent vector.
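The sampling step that distinguishes the VAE in (a) from the vanilla autoencoder in (b) is the reparameterization trick: the encoder outputs a mean and a (log-)variance, and the latent vector is drawn as z = μ + σ·ε with ε ~ N(0, I). A minimal NumPy sketch of this step (the layer sizes are illustrative, not the paper's):

```python
import numpy as np

def sample_latent(mu, log_var, rng=None):
    """Reparameterization trick: z = mu + sigma * eps, eps ~ N(0, I).

    Keeping the randomness isolated in eps makes the sampling step
    differentiable with respect to mu and log_var during training.
    """
    rng = np.random.default_rng() if rng is None else rng
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

# Illustrative 8-dimensional latent space for a batch of 4 encodings.
mu = np.zeros((4, 8))
log_var = np.zeros((4, 8))   # log_var = 0  =>  sigma = 1
z = sample_latent(mu, log_var, rng=np.random.default_rng(0))
print(z.shape)               # (4, 8)
```

At inference time, new samples for data augmentation are obtained by decoding latent vectors drawn this way.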
Figure 2. A vanilla Convolutional Neural Network (CNN) representation.
Figure 3. The sounds were recorded at the seven different locations marked in red.
Figure 4. Examples of the Mel Spectrograms for the chronic, non-chronic and healthy classes after preprocessing.
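The Mel spectrograms in Figure 4 rest on the perceptual pitch scale of Stevens et al. [28]. A common formulation (the HTK-style one; the authors' exact variant is not stated) maps a frequency f in Hz to mels as m = 2595 · log10(1 + f/700):

```python
import math

def hz_to_mel(f_hz: float) -> float:
    """HTK-style mel scale: m = 2595 * log10(1 + f / 700)."""
    return 2595.0 * math.log10(1.0 + f_hz / 700.0)

def mel_to_hz(m: float) -> float:
    """Inverse mapping, used to place mel filter-bank edges back in Hz."""
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

# By construction, 1000 Hz sits at approximately 1000 mel.
print(round(hz_to_mel(1000.0)))   # 1000
```

A mel spectrogram is then a short-time Fourier spectrogram whose frequency axis has been pooled through triangular filters spaced uniformly on this scale.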
Figure 5. VAE scheme configuration for data augmentation.
Figure 6. In (a), the new images generated using the VAE. In (b), the variation between the original and the generated images.
Figure 7. (a) Confusion matrix of the unbalanced dataset. (b) Confusion matrix of the unbalanced dataset with class weights during training. (c) Confusion matrix of the balanced dataset using our proposed scheme.
Figure 8. Comparison between our proposed method and the best state-of-the-art results on the International Conference on Biomedical and Health Informatics (ICBHI) dataset.
Figure 9. (a) Confusion matrix of the unbalanced dataset. (b) Confusion matrix of the unbalanced dataset with class weights during training. (c) Confusion matrix of the balanced dataset using our proposed scheme with VAE.
Table 1. Number of samples for each class.

Class          #
Chronic        810
Non-Chronic    75
Healthy        35
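One of the baselines compared later (Figure 7b) weights the loss during training instead of resampling. With the counts in Table 1, inverse-frequency class weights (normalized so a uniform dataset would give every class weight 1, one common convention, e.g. scikit-learn's "balanced" mode; the paper does not state its exact scheme) come out heavily in favor of the minority classes:

```python
counts = {"Chronic": 810, "Non-Chronic": 75, "Healthy": 35}

total = sum(counts.values())   # 920 recordings
n_classes = len(counts)

# w_c = total / (n_classes * count_c): the "balanced" weighting convention.
weights = {c: total / (n_classes * n) for c, n in counts.items()}
for c, w in weights.items():
    print(f"{c}: {w:.3f}")
# Chronic: 0.379, Non-Chronic: 4.089, Healthy: 8.762
```

The roughly 23:1 weight ratio between Healthy and Chronic illustrates how extreme the imbalance is before augmentation.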
Table 2. Number of samples for each class after data augmentation.

Class          #
Chronic        810
Non-Chronic    900
Healthy        840
Table 3. Results of each of the 10 cross-validation iterations using the proposed combination of a Variational AutoEncoder (VAE) for data augmentation and a CNN for the classification step on chronic disease detection.

             1         2         3         4         5         6         7         8         9         10        Mean      Std
Sensitivity  0.982709  0.985591  0.988473  0.985591  0.988473  0.979827  0.991354  0.976945  0.988473  0.982709  0.985014  0.004465
Specificity  0.987730  0.993865  0.987730  0.987730  0.987730  0.993865  0.993865  0.987730  0.987730  0.993865  0.990184  0.003168
Score        0.985219  0.989728  0.988101  0.986660  0.988101  0.986846  0.992610  0.982338  0.988101  0.988287  0.987599  0.002701
Precision    0.993827  1.000000  1.000000  0.993827  0.993827  0.993865  0.993865  0.993827  1.000000  1.000000  0.996304  0.003181
Recall       0.987730  0.993865  0.987730  0.987730  0.987730  0.993865  0.993865  0.987730  0.987730  0.993865  0.990184  0.003168
F-Score      0.990769  0.996923  0.993827  0.990769  0.990769  0.993865  0.993865  0.990769  0.993827  0.996923  0.993231  0.002427
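The Mean and Std columns of Table 3 can be checked directly from the ten per-fold values; the reported deviations match the sample standard deviation (Bessel-corrected, n − 1 denominator). For the F-Score row:

```python
from statistics import mean, stdev

# Per-fold F-Scores from Table 3.
f_scores = [0.990769, 0.996923, 0.993827, 0.990769, 0.990769,
            0.993865, 0.993865, 0.990769, 0.993827, 0.996923]

print(round(mean(f_scores), 6))   # 0.993231
print(round(stdev(f_scores), 6))  # 0.002427
```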
Table 4. Scheme of the classification neural network based on convolutional layers.

No.  Layer Name           Layer Type          Input Shape        Output Shape
1    cnn_input            InputLayer          (?, 128, 926, 1)   (?, 128, 926, 1)
2    cnn_conv2D           Conv2D              (?, 128, 926, 1)   (?, 126, 924, 10)
3    cnn_batch_norm       BatchNormalization  (?, 126, 924, 10)  (?, 126, 924, 10)
4    cnn_activation_relu  Activation          (?, 126, 924, 10)  (?, 126, 924, 10)
5    cnn_dropout          Dropout             (?, 126, 924, 10)  (?, 126, 924, 10)
6    cnn_maxpooling2d     MaxPooling2D        (?, 126, 924, 10)  (?, 25, 184, 10)
7    cnn_flatten          Flatten             (?, 25, 184, 10)   (?, 46000)
8    cnn_dense            Dense               (?, 46000)         (?, 100)
9    cnn_output           Dense               (?, 100)           (?, 3)
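The shape progression in Table 4 is consistent with a stride-1, unpadded 3×3 convolution with 10 filters followed by 5×5 max pooling; the kernel and pool sizes are inferred here from the shapes, not stated in the table:

```python
def conv2d_valid(h, w, c_out, k=3):
    """Output shape of an unpadded ('valid'), stride-1 k x k convolution."""
    return h - k + 1, w - k + 1, c_out

def max_pool(h, w, c, p=5):
    """Output shape of a p x p max pooling with stride p (floor division)."""
    return h // p, w // p, c

shape = (128, 926, 1)                       # cnn_input
shape = conv2d_valid(*shape[:2], c_out=10)  # cnn_conv2D -> (126, 924, 10)
shape = max_pool(*shape)                    # cnn_maxpooling2d -> (25, 184, 10)
flat = shape[0] * shape[1] * shape[2]       # cnn_flatten -> 46000
print(shape, flat)                          # (25, 184, 10) 46000
```

BatchNormalization, Activation, and Dropout are shape-preserving, which is why rows 3–5 repeat the same dimensions.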
Table 5. Metric results obtained with the five classification approaches on the chronic-diseases dataset.

                    Sensitivity  Specificity  Score  Precision  Recall  F-Score
Dataset unbalanced  0.941        0            0.471  0          0       0
Dataset weighted    0.953        0            0.476  0          0       0
Dataset VAE         0.985        0.990        0.988  0.996      0.990   0.993
Dataset SMOTE       0.950        0.167        0.558  0.500      0.167   0.250
Dataset ADASYN      0.965        0.857        0.911  0.857      0.857   0.857
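The F-Score column in Table 5 is the harmonic mean of Precision and Recall; for instance, the VAE row's 0.993 follows from its precision of 0.996 and recall of 0.990:

```python
def f_score(precision: float, recall: float) -> float:
    """F1 score: harmonic mean of precision and recall."""
    if precision + recall == 0.0:
        return 0.0
    return 2.0 * precision * recall / (precision + recall)

print(round(f_score(0.996, 0.990), 3))  # 0.993  (VAE row)
print(round(f_score(0.500, 0.167), 3))  # 0.250  (SMOTE row)
```

The zero-division guard also explains the 0 F-Scores in the first two rows, where precision and recall are both 0.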
Table 6. Number of samples for each class.

Class           #
COPD            793
Pneumonia       37
Healthy         35
URTI            23
Bronchiectasis  16
Bronchiolitis   13
LRTI            2
Asthma          1
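The per-pathology counts in Table 6 reconcile exactly with the chronic/non-chronic/healthy split of Table 1 under the natural grouping (COPD, bronchiectasis, and asthma as chronic; pneumonia, URTI, bronchiolitis, and LRTI as non-chronic); the grouping below is inferred from the totals, not stated explicitly in the tables:

```python
counts = {"COPD": 793, "Pneumonia": 37, "Healthy": 35, "URTI": 23,
          "Bronchiectasis": 16, "Bronchiolitis": 13, "LRTI": 2, "Asthma": 1}

chronic = {"COPD", "Bronchiectasis", "Asthma"}
non_chronic = {"Pneumonia", "URTI", "Bronchiolitis", "LRTI"}

n_chronic = sum(counts[c] for c in chronic)
n_non_chronic = sum(counts[c] for c in non_chronic)
print(n_chronic, n_non_chronic, counts["Healthy"])  # 810 75 35
```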
Table 7. Results of each of the 10 cross-validation iterations using the proposed combination of VAE for data augmentation and a CNN for the classification step on the pathologies dataset.

             1         2         3         4         5         6         7         8         9         10        Mean      Std
Sensitivity  0.983234  0.992528  0.992701  0.982885  0.983811  0.983071  0.990025  0.991515  0.995110  0.986420  0.988130  0.004745
Specificity  0.992857  0.994186  0.986928  0.980892  0.982558  1.000000  0.994220  0.973333  0.980892  0.975758  0.986162  0.008867
Score        0.988045  0.993357  0.989814  0.981888  0.983184  0.991536  0.992122  0.982424  0.988001  0.981089  0.987146  0.004638
Precision    0.992857  1.000000  0.980519  0.987179  1.000000  0.986667  0.994220  1.000000  1.000000  0.993827  0.993527  0.006873
Recall       0.992857  0.994186  0.986928  0.980892  0.982558  1.000000  0.994220  0.973333  0.980892  0.975758  0.986162  0.008867
F-Score      0.992857  0.997085  0.983713  0.984026  0.991202  0.993289  0.994220  0.986486  0.990354  0.984709  0.989794  0.004757
Table 8. Metric results obtained with the five classification approaches on the pathologies dataset.

                    Sensitivity  Specificity  Score  Precision  Recall  F-Score
Dataset unbalanced  0.888        0.286        0.587  0.444      0.288   0.349
Dataset weighted    0.912        0.071        0.492  0.250      0.071   0.111
Dataset VAE         0.988        0.986        0.987  0.994      0.986   0.990
Dataset SMOTE       0.876        0.429        0.653  0.545      0.429   0.480
Dataset ADASYN      0.917        0.667        0.792  0.500      0.667   0.571