Article

Development of Machine Learning for Asthmatic and Healthy Voluntary Cough Sounds: A Proof of Concept Study

1 Department of Paediatric Anaesthesia, KK Women’s and Children’s Hospital, Singapore 229899, Singapore
2 Anaesthesiology and Perioperative Science, Duke-NUS Medical School, Singapore 169857, Singapore
3 Information Systems, Technology, and Design, Singapore University of Technology and Design, Singapore 487372, Singapore
4 Department of Respiratory Medicine, KK Women’s and Children’s Hospital, Singapore 229899, Singapore
5 Department of Children’s Emergency, KK Women’s and Children’s Hospital, Singapore 229899, Singapore
6 Science and Math, Singapore University of Technology and Design, Singapore 487372, Singapore
* Author to whom correspondence should be addressed.
Appl. Sci. 2019, 9(14), 2833; https://doi.org/10.3390/app9142833
Submission received: 20 May 2019 / Revised: 29 June 2019 / Accepted: 9 July 2019 / Published: 16 July 2019
(This article belongs to the Section Acoustics and Vibrations)

Abstract

(1) Background: Cough is a major presentation in childhood asthma. Here, we aim to develop a machine-learning-based cough sound classifier for asthmatic and healthy children. (2) Methods: Children less than 16 years old were randomly recruited in a children’s hospital from February 2017 to April 2018 and were divided into two cohorts: healthy children and children with acute asthma presenting with cough. Children with other concurrent respiratory conditions were excluded from the asthmatic cohort. Demographic data, duration of cough, and history of respiratory status were obtained. Children were instructed to produce voluntary cough sounds. These clinically labeled cough sounds were randomly divided into training and testing sets. Audio features such as Mel-Frequency Cepstral Coefficients and Constant-Q Cepstral Coefficients were extracted. Using the training set, a classification model was developed with the Gaussian Mixture Model–Universal Background Model (GMM–UBM). Its predictive performance was evaluated on the test set against the physicians’ labels. (3) Results: Asthmatic cough sounds from 89 children (totaling 1192 cough sounds) and healthy coughs from 89 children (totaling 1140 cough sounds) were analyzed. The sensitivity and specificity of the audio-based classification model were 82.81% and 84.76%, respectively, when differentiating coughs from asthmatic children versus coughs from healthy children. (4) Conclusion: Audio-based classification using machine learning is a potentially useful technique for assisting in the differentiation of asthmatic cough sounds from healthy voluntary cough sounds in children.

Graphical Abstract

1. Introduction

Cough in children is one of the commonest presentations to medical practitioners [1,2,3,4,5,6,7,8,9]. In children, cough is a manifestation of a broad spectrum of respiratory illnesses, ranging from rhinosinusitis, atopy, respiratory tract infection, and inhaled foreign body to post-viral/infectious cough [1,2,10]. In particular, cough is a major symptom associated with asthma [7,8,11]. Cough alone can also be the only manifestation of cough-variant asthma [6,12,13], a leading cause of chronic cough in children [6,7,12]. At the same time, cough can also be benign, and healthy children might have up to 100 epochs of cough in 24 h [14].
Despite its prevalence and its recognition as a cardinal symptom in asthma, the clinical value of cough is limited by the lack of available validated objective measures of cough sound characteristics, as well as poor reliability and reproducibility [15] of describing cough in a patient’s history. Thus, the clinical diagnosis, assessment of severity, and the progress monitoring of asthma remain a clinical challenge.
It has been recognized that certain unique cough characteristics in children can be of diagnostic value [16], such as the classical barking sound for croup, paroxysmal spasmodic whoop of pertussis, dry staccato for chlamydia [3,16], and brassy cough for tracheomalacia [5]. However, the sensitivity and specificity of many cough sound qualities have not been defined or studied extensively [1,16].
There is a growing interest in using acoustic features to objectively classify cough sounds from different respiratory diseases, although most work is still fairly limited. Progress in automated cough sound classification systems over the years include systems that differentiate between productive and non-productive cough [17], which detect abnormal lung function [18] and enable rapid diagnosis of pneumonia [19].
Particularly for asthma, cough sound spectrograms were first studied in 1989 by Toop [20], who compared nocturnal and post-exercise cough sounds of one 7-year-old child with asthma with those of a normal child, demonstrating a characteristic frequency spectrum for asthmatic coughs. In another study of 24 children aged 4–11 years, Al-khassaweneh [21] examined time-frequency-domain features (wavelet packet decomposition, Wigner distribution) of cough sounds recorded in a noise-free environment to differentiate between asthmatic and non-asthmatic coughs, yielding 88% and 100% agreement, respectively, with physician diagnosis when verified on eight subjects. There was no mention of whether the subjects tested were from the training cohort; demographic details (other than sex and age), clinical presentations, and clinical history of the recruited children were also unreported. In another study, Amrulloh [22] studied cough sounds from 18 children (9 with asthma, 9 with pneumonia; aged 1 to 86 months) using an artificial neural network to differentiate between the two pathologies through multiple audio features (Mel-Frequency Cepstral Coefficients (MFCCs), non-Gaussian score, and Shannon entropy), achieving a sensitivity, specificity, and kappa of 89%, 100%, and 0.89, respectively.
Although these earlier studies were limited by a fairly small sample size and a limited scope of investigation, and required sophisticated audio recording setups, it was evident that meaningful acoustic parameters could be extracted from cough sounds associated with asthma and could be used as an objective method for assessing asthma.
Leveraging the advancements in audio signal processing and machine learning, and the ubiquitous use of smartphones, we aimed to develop an audio-based machine learning solution to automatically differentiate pathological pulmonary (asthmatic) coughs from normal–voluntary coughs. To achieve this, we built a clinical database of voluntary cough sounds from children with and without pathological respiratory conditions. These cough sounds were recorded using smartphones. The corresponding respiratory diagnoses of pathologic coughs were labeled by physicians. These labeled cough sounds were then used to ‘train’ a classification model using machine learning, which was then ‘tested’ to see if it could predict the pathological etiology of cough sounds. The results of the classification model were compared to the physicians’ labels. In this report, we focus on testing the classification model for its ability to differentiate asthmatic coughs from normal–voluntary coughs.
The primary outcome here is an evaluation of the sensitivity, specificity, and accuracy of the derived classification model to differentiate asthmatic coughs from normal–voluntary coughs in children compared with physicians’ diagnoses.

2. Materials and Methods

Here we report results from a prospective cohort pilot study conducted in KK Women’s and Children’s Hospital, Singapore, from February 2017 to April 2018, with an Institutional Review Board (IRB) approval (SingHealth IRB number 2016/2416) registered with ClinicalTrials.gov (Identifier: NCT03169699).

2.1. Study Design and Study Population

Children less than 16 years old were recruited for the study. Informed consent was obtained from their parents. Recruited children were divided into two cohorts: asthmatic and normal–voluntary. The asthmatic cohort consisted of children with an active cough related to a respiratory pathology (asthma in this case). The normal–voluntary cohort consisted of healthy children with no active cough. Exclusion criteria included severe behavioral conditions, an inability to cooperate, and children nursed in isolation.
Children in the asthmatic cohort were recruited from the Emergency Department, Respiratory Ward, and Clinic and Day Surgery Unit of the Hospital. Children in the normal–voluntary cohort were recruited from the Day Surgery Unit. These children were referred by their physicians and nursing team to the investigating team.
In both cohorts, children were instructed to give a series of voluntary coughs. These cough sounds were recorded at 44.1 kHz, using a dedicated Samsung 6 smartphone (Samsung Group, Seoul, Korea) in the ward, so as to collect the cough sounds in as ecological a manner as possible (i.e., with ambient background ward sounds which included crying, babbling, monitoring alarms, staff talking, etc.). For warded children in the asthmatic cohort, their cough sounds were collected daily whenever possible, until they were discharged. These cough sounds were then used to construct an audio database representing asthmatic and healthy children.
Demographic data such as age, weight, and medical history were referenced from the patient’s medical record. For the asthmatic cohort, the duration of coughs, clinical diagnosis, and results of physical examinations and investigations at the time of presentation of the cough were noted. The respiratory diagnosis related to the cough in question was determined by the attending physician, based on the patient’s history and examination, and the investigation of the chest radiograph, as required.
In this study, coughs from children diagnosed with asthma, exacerbation of asthma, or bronchial reactivity were grouped together under the asthmatic cohort. As bronchial hyper-responsiveness [23] is a common feature in classical asthma and cough-variant asthma, we thus chose to include the bronchial hyperactivity diagnosis together with the asthmatic cohort.
Further, children with asthmatic cough with concurrent respiratory conditions, such as respiratory tract infection, allergic rhinitis, or both, were excluded, as these conditions might confound the quality and characteristics of the asthmatic cough sound in question. Children re-admitted into the hospital within a week after discharge were also excluded from our analysis; this was to eliminate the cases of poor progression of disease or poor response to medication.

2.2. Cough Data Processing

2.2.1. Audio Pre-Processing

The cough sounds were stored in .m4a format and were segmented into individual cough entities corresponding to each cough’s diagnosis label using Adobe Audition CS6. The cough sounds were pre-processed in MATLAB 2017b (MathWorks, Natick, MA, USA) to allow for more precise and consistent modeling. This process consisted of detrending, rescaling, and down-sampling of the audio files to 11.025 kHz.
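The authors implemented this pipeline in MATLAB; as an illustration only, the same three steps can be sketched in Python with NumPy/SciPy, where the function name and defaults are ours, not the authors’:

```python
import numpy as np
from scipy.signal import detrend, resample_poly

def preprocess_cough(x, fs_in=44100, fs_out=11025):
    """Detrend, rescale to unit peak amplitude, and down-sample a cough recording."""
    x = detrend(np.asarray(x, dtype=float))   # remove DC offset / linear trend
    peak = np.max(np.abs(x))
    if peak > 0:
        x = x / peak                          # rescale to [-1, 1]
    # 44.1 kHz -> 11.025 kHz is an exact 4:1 polyphase decimation
    return resample_poly(x, up=1, down=fs_in // fs_out)
```

The exact 4:1 ratio between the recording rate and the target rate makes the polyphase decimation artifact-free apart from the anti-aliasing filter’s transition band.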

2.2.2. Audio Feature Extraction

Two multidimensional audio features, Mel-Frequency Cepstral Coefficients (MFCCs) and Constant-Q Cepstral Coefficients (CQCCs), both representing perceptual cues of the audio spectrum, were extracted from the cough sounds. The choice of MFCCs was motivated by their ubiquity in the audio processing arena: MFCCs are the most commonly adopted acoustic features in automatic speech applications, including speaker and language identification [24,25]. CQCCs are features initially developed to detect spoofing in automatic speaker verification systems, where they have been reported to outperform other features in distinguishing original audio from spoofed audio [26,27]. Since the children in the normal–voluntary cohort were instructed to cough voluntarily, and thus mimic a sick person, we hypothesized that CQCCs might be able to detect such mimicking.
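For orientation, a minimal MFCC extractor (framing, FFT power spectrum, triangular mel filterbank, log compression, DCT) can be sketched as follows. All frame sizes and coefficient counts here are illustrative defaults, not the values used in the study; CQCC extraction (a constant-Q transform followed by similar cepstral steps) is omitted for brevity:

```python
import numpy as np
from scipy.fftpack import dct

def mfcc(signal, fs=11025, n_fft=512, hop=256, n_mels=26, n_ceps=13):
    """Minimal MFCC extraction: framing, windowing, power spectrum,
    mel filterbank, log compression, and DCT."""
    # frame the signal and apply a Hamming window
    n_frames = 1 + (len(signal) - n_fft) // hop
    frames = np.stack([signal[i * hop:i * hop + n_fft] for i in range(n_frames)])
    frames = frames * np.hamming(n_fft)
    power = np.abs(np.fft.rfft(frames, n_fft)) ** 2 / n_fft
    # triangular mel filterbank between 0 Hz and the Nyquist frequency
    def hz2mel(f): return 2595.0 * np.log10(1.0 + f / 700.0)
    def mel2hz(m): return 700.0 * (10.0 ** (m / 2595.0) - 1.0)
    mel_pts = np.linspace(hz2mel(0.0), hz2mel(fs / 2.0), n_mels + 2)
    bins = np.floor((n_fft + 1) * mel2hz(mel_pts) / fs).astype(int)
    fbank = np.zeros((n_mels, n_fft // 2 + 1))
    for m in range(1, n_mels + 1):
        l, c, r = bins[m - 1], bins[m], bins[m + 1]
        fbank[m - 1, l:c] = (np.arange(l, c) - l) / max(c - l, 1)   # rising edge
        fbank[m - 1, c:r] = (r - np.arange(c, r)) / max(r - c, 1)   # falling edge
    feats = np.log(power @ fbank.T + 1e-10)       # log mel energies
    return dct(feats, type=2, axis=1, norm='ortho')[:, :n_ceps]
```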

2.2.3. Classification Modeling of Cough Sound

A classification model was built using the Gaussian Mixture Model–Universal Background Model (GMM–UBM) approach. Based on this model, classification per diagnosis was performed by calculating the likelihood ratio (LR) [28]. The probability density function for the UBM was estimated using a Gaussian Mixture Model (GMM) for an optimal fit to the data. Both voluntary cough models (from asthmatic and healthy children) were then created from this UBM by adapting it towards a better fit for the normal–voluntary cough data and the asthmatic cough data, respectively. These adaptations were achieved via a maximum a posteriori (MAP) procedure [29,30]. We built such a classification model based on the MFCCs and CQCCs separately. Finally, the LR results from each were combined, referred to in this report as the fused model.
In order to classify unknown audio samples, the corresponding cough audio features were compared against both the normal–voluntary cough model and the asthmatic cough model. Here, LR was calculated as
\[ \mathrm{LR} = \frac{p(E \mid H_{\text{normal–voluntary cough}})}{p(E \mid H_{\text{asthmatic cough}})} \]
where \( p(E \mid H_{\text{normal–voluntary cough}}) \) is the conditional probability of the evidence \(E\) given the hypothesis \( H_{\text{normal–voluntary cough}} \) that a cough sample is a normal–voluntary cough, while \( p(E \mid H_{\text{asthmatic cough}}) \) is the conditional probability of the evidence given the hypothesis \( H_{\text{asthmatic cough}} \) that the cough sample is asthmatic.
LR ≫ 1 supports the normal–voluntary cough hypothesis, while LR ≪ 1 supports the asthmatic cough hypothesis. We further calculated the Log-Likelihood Ratio (LLR) from this LR as \( \mathrm{LLR} = \log_{10}(\mathrm{LR}) \). Positive LLRs support the normal–voluntary cough hypothesis and negative LLRs support the asthmatic cough hypothesis. The LLR magnitude indicates the strength of support for the respective hypothesis [31,32].
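A GMM–UBM with MAP adaptation of the component means can be sketched as below, here using scikit-learn’s GaussianMixture as the GMM back end. The relevance factor r and the means-only adaptation are common choices from the speaker verification literature [29,30], not parameters reported by the authors:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def train_ubm(features, n_components=8, seed=0):
    """Fit the Universal Background Model on pooled training features."""
    return GaussianMixture(n_components=n_components, covariance_type='diag',
                           random_state=seed).fit(features)

def map_adapt_means(ubm, features, r=16.0):
    """MAP-adapt the UBM component means towards one class's data
    (means-only adaptation with relevance factor r)."""
    resp = ubm.predict_proba(features)                 # posterior occupancy
    n_k = resp.sum(axis=0)                             # soft counts per component
    ex = (resp.T @ features) / np.maximum(n_k[:, None], 1e-10)
    alpha = (n_k / (n_k + r))[:, None]                 # adaptation coefficient
    adapted = GaussianMixture(n_components=ubm.n_components,
                              covariance_type='diag')
    adapted.weights_ = ubm.weights_                    # weights/covariances kept
    adapted.covariances_ = ubm.covariances_
    adapted.precisions_cholesky_ = ubm.precisions_cholesky_
    adapted.means_ = alpha * ex + (1 - alpha) * ubm.means_
    return adapted

def log_likelihood_ratio(x, model_normal, model_asthma):
    """Base-10 LLR of a test sample's features under the two adapted models."""
    return (model_normal.score(x) - model_asthma.score(x)) / np.log(10)
```

An unknown sample’s features are then scored under both adapted models, and the sign of the resulting base-10 LLR indicates which hypothesis is supported.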

2.2.4. Train-Test Data Split-Up

The methodology for training, testing, and modeling cough sounds is shown in Figure 1. The cough samples were randomly divided into training and testing sets, using a 70–30% split (this proportion is routine practice in machine learning [33]). In the training phase, the extracted audio features (MFCCs and CQCCs) were used to train the classifier (GMM–UBM) using physician-labeled cough sounds (coughs from asthmatic cohorts versus coughs from healthy cohorts). In the testing phase, the performance of the trained classifier was evaluated by comparing the classifier’s label prediction to the physician’s label.
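A plain random 70–30% split of the labeled cough sounds might look like this (illustrative helper, not the authors’ code):

```python
import numpy as np

def split_70_30(samples, labels, seed=42):
    """Randomly split clinically labeled cough sounds into 70% train / 30% test."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(samples))
    cut = int(0.7 * len(samples))          # first 70% of the shuffled indices
    tr, te = idx[:cut], idx[cut:]
    return ([samples[i] for i in tr], [labels[i] for i in tr],
            [samples[i] for i in te], [labels[i] for i in te])
```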

2.2.5. Performance of the Classification Model

Performance of the classification model was measured on the basis of classification accuracy and was demonstrated using Tippett Plots and Receiver Operating Characteristics (ROC) curves.

Classification Accuracy of the Classification Model

The classification accuracy was estimated by comparing predicted outputs in our test set with the actual outputs [24] and was calculated as follows:
\[ \text{Accuracy (\%)} = \frac{\text{Number of correct predictions}}{\text{Total number of predictions}} \times 100 \]
For this binary classification problem, the above equation for determining accuracy could be rewritten using True Positives (TP), True Negatives (TN), False Positives (FP), and False Negatives (FN), as follows:
\[ \text{Accuracy (\%)} = \frac{\mathrm{TP} + \mathrm{TN}}{\mathrm{TP} + \mathrm{FP} + \mathrm{TN} + \mathrm{FN}} \times 100 \]
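Accuracy, together with the sensitivity and specificity reported in the Results, follows directly from the confusion-matrix counts; for example:

```python
def binary_metrics(tp, tn, fp, fn):
    """Accuracy, sensitivity (true positive rate), and specificity
    (true negative rate), all in percent."""
    accuracy = 100.0 * (tp + tn) / (tp + tn + fp + fn)
    sensitivity = 100.0 * tp / (tp + fn)
    specificity = 100.0 * tn / (tn + fp)
    return accuracy, sensitivity, specificity
```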

Tippett Plots

Tippett plots (see Figure 3) are used to show the cumulative proportion of LLR values for both normal–voluntary and asthmatic cough comparisons [34,35]. Given the fact that positive LLR values support the normal–voluntary cough hypothesis and negative LLR values support the asthmatic cough hypothesis, the further apart the curves are (i.e., blue versus red curves), the better the discrimination performance.
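One common way to compute the two Tippett curves is as cumulative proportions of LLRs on either side of a sweeping threshold; conventions vary across the forensic-statistics literature, so this sketch is one plausible reading rather than the authors’ exact procedure:

```python
import numpy as np

def tippett_curves(llr_normal, llr_asthma):
    """Cumulative proportions for a Tippett plot: for normal–voluntary trials,
    the proportion of LLRs >= each threshold; for asthmatic trials, the
    proportion of LLRs <= each threshold."""
    thresholds = np.sort(np.concatenate([llr_normal, llr_asthma]))
    prop_normal = np.array([(np.asarray(llr_normal) >= t).mean() for t in thresholds])
    prop_asthma = np.array([(np.asarray(llr_asthma) <= t).mean() for t in thresholds])
    return thresholds, prop_normal, prop_asthma
```

With well-separated LLR distributions, the two curves cross low and far from the LLR = 0 line, which is what the fused model’s more symmetric Tippett plot reflects.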

Receiver Operating Characteristics (ROC)

The performance of the binary classifier was investigated using the Receiver Operating Characteristics (ROC). The ROC curve was created by plotting the TP rate (sensitivity) (TP would be asthmatic cough classified correctly as asthmatic) versus the false positive rate (1-specificity) for various decision thresholds. The area under the ROC curve (AROC) was calculated to assess the predictability of the audio features to discriminate between the two cohorts.
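An ROC curve and its area can be computed directly from the classifier’s decision scores. The sketch below treats higher scores as “more likely asthmatic” (e.g., the negated LLR) and ignores tie-handling subtleties:

```python
import numpy as np

def roc_points(scores, labels):
    """ROC curve from decision scores; labels: 1 = asthmatic (positive class)."""
    order = np.argsort(-np.asarray(scores, dtype=float))   # descending scores
    y = np.asarray(labels)[order]
    tpr = np.concatenate([[0.0], np.cumsum(y) / max(y.sum(), 1)])        # sensitivity
    fpr = np.concatenate([[0.0], np.cumsum(1 - y) / max((1 - y).sum(), 1)])  # 1 - specificity
    auc = float(np.sum(np.diff(fpr) * (tpr[1:] + tpr[:-1]) / 2.0))       # trapezoid rule
    return fpr, tpr, auc
```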

2.2.6. Statistical Analysis

The sample size was not calculated for this exploratory study. Student’s t-test was used for parametric continuous data and the Mann-Whitney U test was used for non-parametric continuous data. The χ2 test was used for categorical data. Statistical analysis was performed using SPSS 18.0 (SPSS Inc., Chicago, IL, USA) and MATLAB 2017b. A p-value less than 0.05 indicated statistical significance.
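With SciPy, the three tests map onto stats.ttest_ind, stats.mannwhitneyu, and stats.chi2_contingency. The data below are synthetic values merely shaped like the study’s summary statistics (group sizes 36 and 53, hypothetical contingency counts), purely for illustration:

```python
import numpy as np
from scipy import stats

# Synthetic demographic-style data (NOT the study data)
rng = np.random.default_rng(1)
dur_ward = rng.normal(5.22, 3.28, 36)   # cough duration, Respiratory Ward (days)
dur_ce = rng.normal(2.92, 3.02, 53)     # cough duration, Children's Emergency (days)

t, p_t = stats.ttest_ind(dur_ward, dur_ce)        # parametric comparison
u, p_u = stats.mannwhitneyu(dur_ward, dur_ce)     # non-parametric comparison
# hypothetical 2x2 table: history of asthma (yes/no) by recruitment site
chi2, p_c, dof, expected = stats.chi2_contingency([[40, 13], [18, 18]])
significant = p_t < 0.05                          # 0.05 significance threshold
```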

3. Results

A total of 89 children aged 27 to 193 months were recruited in each cohort, as shown in Figure 2. The 89 children in the asthmatic cohort produced a total of 1192 cough sounds, of which 726 were used for training and 466 for testing. A total of 1140 cough sounds were produced by the 89 children in the normal–voluntary cohort; 756 were used for training and 384 for testing. Eight to 15 coughs were produced by each child. In the asthmatic cohort, 53 children were recruited from the Children’s Emergency and 36 from the Respiratory Ward. Children from the Respiratory Ward had a significantly longer mean duration of cough, at 5.22 (3.28) days versus 2.92 (3.02) days for children from the Children’s Emergency (p = 0.003). Significantly more children from the Children’s Emergency had a history of asthma compared to those in the Respiratory Ward (p = 0.023).
The demographic characteristics of the study population are summarized in Table 1.
The clinical characteristics of the training and testing groups for both the asthmatic and the normal–voluntary cohorts are summarized in Table 2 and Table 3, respectively.
The results on classification accuracy (Tippett plots and ROC analysis) are presented in Table 4, while the corresponding Tippett plots are shown in Figure 3.
The LLR values resulting from a classification model trained using MFCCs showed a relatively larger range (11.5) for normal–voluntary cough (obtained from the blue line in Figure 3A showing the range of values from –7.5 to 4). For classification models built using CQCCs features and those of a fused model, using both MFCCs and CQCCs, the corresponding range was relatively lower and was found to be 7.8 (−3.8 to 4) and 6.5 (−3 to 3.5), respectively. The zero-crossing point in the Tippett plots resulting from MFCCs and CQCCs were off-centered (with respect to LLR = 0 line), whereas for the fused model, the Tippett plot was rather more symmetric. Further, the cross-over point between the blue and red curves in the fused result was lower, i.e., the number of misclassifications in the fused result was therefore also lower.
The accuracy analysis using ROC curves is shown in Figure 4. The best performance was obtained when fusing MFCCs and CQCCs. In this case, the accuracy, sensitivity, specificity, and AROC for differentiating normal–voluntary coughs from asthmatic coughs reached 83.76%, 82.81%, 84.76%, and 0.91, respectively.

4. Discussion

A total of 1482 asthmatic and normal–voluntary coughs were used in this study to train a GMM–UBM classification model, exploiting audio features such as MFCCs and CQCCs. The resulting classification model was verified using 852 cough sounds from 66 children. As presented above, it is evident that analyzing cough sounds provides a feasible strategy to help distinguish between asthmatic and healthy patients with reasonably high confidence.
Cough is a major symptom in asthma [7,8,11], a growing global chronic health problem in children. Asthma is characterized by a reversible variable airway obstruction, airway inflammation, and an increased airway responsiveness to stimuli [7,12,36]. While classical asthma is described by the triad of dyspnea, cough, and wheeze, cough alone might be the only manifestation of cough-variant asthma [6,12,13], a leading cause of chronic cough in children [6,7,12,37,38], which can remain undiagnosed. Cough has also been found to signal the onset of asthmatic exacerbation [14]. At the same time, there is a growing concern about overdiagnosis of asthma based on cough alone, in recent years [16]. The clinical diagnosis, assessment of severity, monitoring of asthma progression, and response to therapy is, therefore, a significant clinical challenge. To add to the challenge, healthy children can also have normal epochs of cough [14].
Despite its prevalence and recognition as a cardinal symptom/sign in most respiratory diseases, the clinical value of cough for diagnosis has, up until now, been fairly limited. This is due to the poor reliability and reproducibility of cough reporting and the lack of validated objective measurement and assessment of cough characteristics in terms of severity and type [1,8,15]. If meaningful data could be extracted from cough sound signals, then cough as a symptom/sign could serve as an objective assessment tool. Cough presents in a wide range of diseases, including asthma; since asthma results in variable airway narrowing and airflow obstruction [7,12,36], the expiratory expulsion of airflow through the airway and laryngeal structures becomes turbulent, leading to changes in the cough sound. Further, in cases where diagnosis is uncertain, chest radiography is regularly performed as part of the clinical evaluation [1]; one-fifth of our subjects in the asthmatic cohort underwent chest radiography. From a medical perspective, if assistive diagnosis can be provided by other means, unnecessary radiological exposure can be reduced in children. While spirometry with or without tests of bronchodilator responsiveness is useful in assessing asthma, these tests are not feasible in young children and infants. An automated cough-audio-based classification system, such as the one proposed in this paper, offers the advantage of a contact-free, readily accessible solution that provides machine-assisted support for diagnosis by physicians and home screening by parents.
To our knowledge, this report is the first to use such a large sample size for automatic cough sound analysis between asthmatic and normal-voluntary coughs. Additionally, our study is novel as the cough sounds were recorded simply using a smartphone, while interacting with the patient in an ambient ecological setting right in the clinic/ward (i.e., not a noise-free environment, and in situ).
We found that audio features (MFCCs and CQCCs) can be used to train a classification model to distinguish asthmatic versus normal–voluntary coughs, in a pediatric population—we were able to achieve a competitive physician agreement rate, with sensitivity and specificity greater than 80%.
It might be worth noting that in this study, children recruited in hospital settings could present a rather more severe asthma than in primary healthcare settings. At first glance, this might limit the generalization of our findings; however, we see this as a strength in our exploratory study where the severity of the disease might in fact be helpful to amplify the features, using model building for cough classification. Factors such as practicality and feasibility made it favorable to conduct the study in a single hospital center. Future work should certainly include patients recruited from wider primary care settings to build a larger cough dataset that include other acute respiratory diseases.
There are some limitations to our study. First, there were significantly more male participants in the normal–voluntary cohort. This was due to the predominance of male participants in the day surgery patient population, from which children in the normal–voluntary cohort were recruited. Nevertheless, we speculate that any sex bias was minimized because the mean age was below puberty, when voice change occurs. Second, correct cough labeling by the attending physician was critical for training our classification model. The physicians involved in labeling respiratory pathologies were not of the same clinical grade: diagnoses of respiratory status in the Respiratory Ward and Clinic were made by at least a specialist consultant grade physician, whereas diagnoses in the Emergency Department were made by specialists as well as residents. However, in our hospital, residents work under the supervision of specialist physicians. We assumed that the diagnoses were appropriate, as patients recruited from the Emergency Department were followed up and none required a re-visit to the hospital. Third, a fifth of the children in the asthmatic cohort had no wheeze on auscultation; we do not know whether the presence or absence of wheeze or other phenotypic expression might have affected the cough sounds. Last, we did not classify coughs as acute or chronic; the duration of cough at presentation falls within the definition of acute cough if it is less than 3 weeks. In the future, this study could be repeated in clinical settings with age stratification and greater granularity in terms of the phenotypic variation and duration of cough at presentation. It might also be interesting to correlate the acoustic findings with spirometry function in future studies.
As this is merely a pilot study, we limited the machine learning to cough sounds arising from asthma and bronchial reactivity only; cough sounds from asthmatic children with concurrent respiratory conditions were excluded. Consequently, our classification model could not differentiate asthmatic coughs associated with concurrent respiratory conditions.
We chose to work with GMM–UBM for a few reasons. First, this is a proof-of-concept pilot study to explore machine learning in cough sound classification within a pediatric population; most respiratory sounds investigated thus far were breathing sounds or lung sounds rather than cough sounds. Second, the GMM–UBM technique has proven its efficiency in respiratory sound classification [39,40,41,42] and has been found to perform reliably, even with small data samples. While some recent studies showed that deep learning approaches such as Convolutional Neural Networks are more efficient for respiratory sound classification (notably lung sounds), these algorithms require a large dataset [43,44,45,46,47]. In [48], Bhattacharya reported that for a smaller dataset, GMM–UBM outperforms deep, recurrent neural network models. Given the size of our dataset and the pilot nature of the study in children, we opted for the GMM–UBM. In future work, when our dataset grows larger, we will explore how deep learning approaches can further improve prediction accuracy.

5. Conclusions and Future Work

This was the first large-scale study in which a machine learning classification model was used to differentiate asthmatic coughs from normal–voluntary coughs. While further work is ongoing, we are already able to report a classification accuracy exceeding 80%. This shows that computer-assisted diagnosis of asthma using cough sounds is a promising clinical screening modality. As an objective assessment tool, it can aid physicians in clinical diagnosis, especially when wheeze is absent. It is also valuable for tracking asthma progression and response to medication, and it is especially useful in younger children for whom conventional lung function tests such as spirometry are not practical. Another advantage is the potential to reduce reliance on radiological investigations for diagnosis. Other benefits of automatic classification of cough sounds using machine learning include its potential use in telemetric applications (e.g., allowing remote screening by parents, which is especially beneficial where access to a pediatric specialist is limited, and enabling contact-free consultation, assessment, and follow-up in infectious respiratory diseases).
In the future, this approach could be extrapolated to preoperative screening and could potentially assess patients with asthmatic coughs before surgery. Early knowledge of such respiratory conditions is favorable for surgery scheduling optimization (e.g., preventing last-minute case cancellations on the day of surgery).

Author Contributions

Conceptualization, H.H. and S.L.; Methodology, H.H. and S.L.; Patient recruitment, H.H., O.T., K.L., and S.T.; Software and investigation, B.B. and A.K.; Validation, Data curation and formal analysis, B.B., A.K., D.H., H.H., and J.C.; Writing—original and editing, B.B., D.H., H.H., and J.C.; Writing—review, H.H., O.T., K.L., S.T., B.B., A.K., D.H., and J.C.; Supervision, J.C. and H.H.; Project administration, H.H.; Funding acquisition, H.H.

Funding

This work was supported by a SMART Innovation Center Ignition Grant to KK Women’s and Children’s Hospital (Grant Number ING000091-ICT (IGN), Acoustic Analytic Apps for Smart Telehealth Screening–Creating a Big Data Study). The contact person for the SMART Innovation grant is Candy Yeo Wai Cheng.

Acknowledgments

We thank Dianna Sri Dewi and Foo Chuan Ping for assistance in recruitment and administrative work related to the grant. We thank the staff at Children’s Emergency, Children Day Surgery Unit, and the Respiratory Clinic and Wards for facilitating the recruitment of patients.

Conflicts of Interest

The authors declare no conflict of interest. The work reported here was supported by a SMART Innovation Center Ignition grant to KK Women’s and Children’s Hospital (Grant Number ING000091-ICT (IGN), Acoustic Analytic Apps for Smart Telehealth Screening–Creating a Big Data Study). The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

Figure 1. Experimental methodology to train and evaluate the Gaussian Mixture Model (GMM) classifier.
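The GMM pipeline in Figure 1 (cepstral feature frames scored against per-class mixture models) can be illustrated with a minimal sketch. This is not the authors' implementation: it uses scikit-learn's `GaussianMixture` on synthetic feature frames, and it fits the two class models directly rather than MAP-adapting them from the universal background model as a full GMM-UBM system would. All data, dimensions, and mixture sizes below are placeholder choices.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Synthetic stand-ins for cepstral feature frames (rows = frames, cols = 13 coefficients).
asthma_train = rng.normal(loc=0.5, scale=1.0, size=(500, 13))
healthy_train = rng.normal(loc=-0.5, scale=1.0, size=(500, 13))

# A "universal background model" fitted on pooled data. In a full GMM-UBM
# system the class models would be MAP-adapted from this; here they are
# simply fitted from scratch to keep the sketch short.
ubm = GaussianMixture(n_components=8, covariance_type="diag", random_state=0)
ubm.fit(np.vstack([asthma_train, healthy_train]))

gmm_asthma = GaussianMixture(n_components=8, covariance_type="diag", random_state=0).fit(asthma_train)
gmm_healthy = GaussianMixture(n_components=8, covariance_type="diag", random_state=0).fit(healthy_train)

def log_likelihood_ratio(frames):
    """Average per-frame log-likelihood ratio; positive favours 'asthmatic'."""
    return gmm_asthma.score(frames) - gmm_healthy.score(frames)

# Score an unseen (synthetic) cough: a positive LLR classifies it as asthmatic.
test_cough = rng.normal(loc=0.5, scale=1.0, size=(120, 13))
print(log_likelihood_ratio(test_cough))
```

Thresholding the log-likelihood ratio at zero gives the binary decision; sweeping the threshold produces the ROC curves of Figure 4.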
Figure 2. Subject recruitment, enrolment, cohort creation, and data processing flow diagram.
Figure 3. Tippett plots showing the performance of the classification model trained using features from (A) Mel-Frequency Cepstral Coefficients (MFCCs), (B) Constant Q Cepstral Coefficients (CQCCs), and (C) fused features.
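The MFCC features compared in Figure 3 follow a standard recipe: frame the signal, window it, take the power spectrum, apply a triangular mel filterbank, take logs, and decorrelate with a DCT. The sketch below is a from-scratch NumPy/SciPy illustration of that recipe, not the paper's extraction code; the frame size, hop, and filter counts are arbitrary textbook choices.

```python
import numpy as np
from scipy.fft import dct
from scipy.signal import get_window

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mfcc(signal, sr, n_fft=512, hop=256, n_mels=26, n_ceps=13):
    # 1. Frame the signal and apply a Hamming window.
    window = get_window("hamming", n_fft)
    n_frames = 1 + (len(signal) - n_fft) // hop
    frames = np.stack([signal[i * hop: i * hop + n_fft] * window
                       for i in range(n_frames)])

    # 2. Power spectrum of each frame.
    power = np.abs(np.fft.rfft(frames, n_fft)) ** 2

    # 3. Triangular mel filterbank, equally spaced on the mel scale.
    mel_pts = np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2.0), n_mels + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / sr).astype(int)
    fbank = np.zeros((n_mels, n_fft // 2 + 1))
    for m in range(1, n_mels + 1):
        left, centre, right = bins[m - 1], bins[m], bins[m + 1]
        for k in range(left, centre):
            fbank[m - 1, k] = (k - left) / max(centre - left, 1)
        for k in range(centre, right):
            fbank[m - 1, k] = (right - k) / max(right - centre, 1)

    # 4. Log mel energies, then DCT -> cepstral coefficients.
    mel_energy = np.log(power @ fbank.T + 1e-10)
    return dct(mel_energy, type=2, axis=1, norm="ortho")[:, :n_ceps]

# 100 ms synthetic tone at 16 kHz as a stand-in for a cough recording.
sr = 16000
t = np.arange(int(0.1 * sr)) / sr
y = np.sin(2 * np.pi * 440 * t)
feats = mfcc(y, sr)
print(feats.shape)  # 5 frames x 13 coefficients -> (5, 13)
```

The CQCC features differ mainly in step 2, replacing the fixed-resolution FFT with a constant-Q transform before cepstral analysis.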
Figure 4. Receiver Operator Characteristic (ROC) curves of the classification model built with (A) MFCCs, (B) CQCCs, and (C) Fused features (area under ROC, shaded).
Table 1. Clinical characteristics of the study population. * Independent T-test, ^ Mann-Whitney U test.

| Demographic Characteristics | Normal–Voluntary Cohort (N = 89) | Asthmatic Cohort (N = 89) | p-Value |
|---|---|---|---|
| Age (months)—mean (SD) | 108.86 (34.59) | 102.09 (36.22) | 0.734 * |
| Sex (Male:Female) | 80:9 | 60:29 | 0.000 |
| Race (Malay:Chinese:Indian:Others) | 43:38:6:2 | 44:24:14:7 | 0.012 |
| Weight (kg)—mean (SD) | 33.74 (15.39) | 32.39 (15.43) | 0.457 ^ |
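The footnotes to Tables 1–3 mark each p-value with the test used: an independent t-test (*) for approximately normal measures and a Mann-Whitney U test (^) for the rest. A hedged SciPy sketch of the two tests, run on synthetic cohorts whose means and SDs merely echo the age row of Table 1 (these are not the study data):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Synthetic stand-ins for each cohort's ages in months.
healthy_age = rng.normal(108.86, 34.59, size=89)
asthma_age = rng.normal(102.09, 36.22, size=89)

# Independent t-test (marked * in the tables): compares group means,
# assuming roughly normal distributions.
t_stat, p_t = stats.ttest_ind(healthy_age, asthma_age)

# Mann-Whitney U test (marked ^): rank-based, no normality assumption.
u_stat, p_u = stats.mannwhitneyu(healthy_age, asthma_age)

print(f"t-test p = {p_t:.3f}, Mann-Whitney p = {p_u:.3f}")
```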
Table 2. Clinical characteristics of the study population in the asthmatic cohort.

| Asthmatic Cohort, N = 89 | Training Group (N = 65) | Testing Group (N = 24) | p-Value |
|---|---|---|---|
| Demographic Characteristics | | | |
| Age (months)—mean (SD) | 104.75 (37.97) | 94.92 (30.57) | 0.091 * |
| Sex (Male:Female) | 41:24 | 19:5 | 0.205 |
| Race (Malay:Chinese:Indian:Others) | 33:17:11:4 | 11:7:3:3 | 0.738 |
| Weight (kg)—mean (SD) | 32.29 (15.17) | 32.65 (16.47) | 0.963 ^ |
| Duration of cough at presentation (days)—mean (SD) | 3.54 (3.96) | 4.86 (4.91) | 0.364 ^ |
| Past Medical History, N | | | 0.476 * |
| None | 13 | 3 | |
| Asthma | 43 | 19 | |
| Allergic rhinitis | 9 | 6 | |
| Recurrent wheeze | 7 | 1 | |
| Clinical Parameters at Presentation of Cough | | | |
| Shortness of breath (Yes:No) | 37:23 | 15:7 | 0.231 |
| Tmax (°C)—mean (SD) | 37.26 (0.48) | 37.03 (0.50) | 0.016 ^ |
| Respiratory rate/min—mean (SD) | 27.57 (5.59) | 28.92 (6.82) | 0.400 ^ |
| Heart rate/min—mean (SD) | 117.38 (20.90) | 112.08 (17.55) | 0.273 * |
| Auscultation at Presentation of Cough, N | | | 0.189 |
| No added sounds | 16 | 4 | |
| Rhonchi/wheeze | 44 | 14 | |
| Crepitation | 2 | 1 | |
| Rhonchi/wheeze and Crepitation | 3 | 5 | |
| Chest Radiography Performed, N | | | 0.600 |
| Not done | 52 | 18 | |
| Done, negative finding | 6 | 4 | |
| Done, positive finding | 7 | 2 | |

Definition of abbreviations: Tmax—maximum temperature; NA—not applicable. * Independent T-test, ^ Mann-Whitney U test. Population numbers (N) in the Past Medical History might include children with more than one medical condition.
Table 3. Clinical characteristics of the study population in the normal–voluntary cohort. Population numbers (N) in the Past Medical History might include children with more than one medical condition, presented non-contemporaneously. * Independent T-test, ^ Mann-Whitney U test, URTI (Upper Respiratory Tract Infection).

| Normal–Voluntary Cohort, N = 89 | Training Group (N = 71) | Testing Group (N = 42) | p-Value |
|---|---|---|---|
| Demographic Characteristics | | | |
| Age (months)—mean (SD) | 108.81 (32.60) | 107.61 (35.47) | 0.857 * |
| Sex (Male:Female) | 63:8 | 39:3 | 0.744 |
| Race (Malay:Chinese:Indian:Others) | 34:31:4:2 | 19:20:2:1 | 0.980 |
| Weight (kg)—mean (SD) | 32.76 (13.89) | 34.97 (16.62) | 0.680 ^ |
| Past Medical History, N | | | 0.734 * |
| None | 52 | 32 | |
| Asthma | 6 | 2 | |
| Allergic rhinitis | 10 | 7 | |
| Recurrent wheeze | 0 | 0 | |
| Mild cold or recovered from URTI | 6 | 2 | |
Table 4. Classification accuracy based on Tippett plot and Receiver Operating Characteristic (ROC) analysis. The first three data columns report Tippett classification accuracy; the last three report the ROC analysis.

| Audio Features | Normal–Voluntary Cough (%) | Asthma/Bronchial Reactivity Cough (%) | Overall Mean (%) | Sensitivity (%) | Specificity (%) | AROC (95% CI) |
|---|---|---|---|---|---|---|
| MFCCs | 78.91 | 74.03 | 76.24 | 78.39 | 75.54 | 0.84 (0.82–0.87) |
| CQCCs | 78.12 | 86.48 | 82.71 | 81.25 | 84.98 | 0.92 (0.90–0.93) |
| Fused Result | 82.55 | 84.76 | 83.76 | 82.81 | 84.76 | 0.91 (0.89–0.93) |
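The sensitivity, specificity, and AROC figures of Table 4 are standard ROC quantities. The sketch below shows how they can be computed with scikit-learn from per-cough classifier scores; the scores are synthetic log-likelihood ratios (means, spreads, and sample sizes are placeholders), not the study's outputs, though the resulting numbers land near Table 4's fused-feature row by construction.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(2)

# Synthetic per-cough scores: label 1 = asthmatic cough, 0 = healthy cough.
labels = np.r_[np.ones(600), np.zeros(600)]
scores = np.r_[rng.normal(1.0, 1.0, 600), rng.normal(-1.0, 1.0, 600)]

# Area under the ROC curve (AROC), over all possible thresholds.
auc = roc_auc_score(labels, scores)

# Sensitivity and specificity at a single operating point (threshold = 0).
pred = scores > 0
sensitivity = (pred & (labels == 1)).sum() / (labels == 1).sum()
specificity = (~pred & (labels == 0)).sum() / (labels == 0).sum()

print(f"AUC = {auc:.2f}, sensitivity = {sensitivity:.2%}, specificity = {specificity:.2%}")
```

Moving the threshold trades sensitivity against specificity; the AROC summarizes performance across that whole trade-off, which is why Table 4 reports both.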

Hee, H.I.; Balamurali, B.; Karunakaran, A.; Herremans, D.; Teoh, O.H.; Lee, K.P.; Teng, S.S.; Lui, S.; Chen, J.M. Development of Machine Learning for Asthmatic and Healthy Voluntary Cough Sounds: A Proof of Concept Study. Appl. Sci. 2019, 9, 2833. https://doi.org/10.3390/app9142833
