Article

Development of Machine-Learning Models for Tinnitus-Related Distress Classification Using Wavelet-Transformed Auditory Evoked Potential Signals and Clinical Data

by Ourania Manta 1,*, Michail Sarafidis 1, Winfried Schlee 2,3, Birgit Mazurek 4, George K. Matsopoulos 1 and Dimitrios D. Koutsouris 1

1 Biomedical Engineering Laboratory, School of Electrical and Computer Engineering, National Technical University of Athens, 15780 Athens, Greece
2 Department of Psychiatry and Psychotherapy, University of Regensburg, 93053 Regensburg, Germany
3 Institute for Information and Process Management, Eastern Switzerland University of Applied Sciences, 9001 St. Gallen, Switzerland
4 Tinnitus Center, Charité—Universitätsmedizin Berlin, Freie Universität Berlin and Humboldt-Universität zu Berlin, 10117 Berlin, Germany
* Author to whom correspondence should be addressed.
J. Clin. Med. 2023, 12(11), 3843; https://doi.org/10.3390/jcm12113843
Submission received: 5 February 2023 / Revised: 30 May 2023 / Accepted: 2 June 2023 / Published: 4 June 2023
(This article belongs to the Section Otolaryngology)

Abstract: Tinnitus is a highly prevalent condition, affecting more than 1 in 7 adults in the EU and diminishing sufferers’ quality of life. In this study, we utilised data collected within the “UNITI” project, the largest EU tinnitus-related research programme. Initially, we extracted characteristics from both auditory brainstem response (ABR) and auditory middle latency response (AMLR) signals, which were derived from tinnitus patients. We then combined these features with the patients’ clinical data and integrated them to build machine-learning models for the classification of individuals and their ears according to their level of tinnitus-related distress. Several models were developed and tested on different datasets to determine the most relevant features and achieve high performance. Specifically, seven widely used classifiers were applied to all generated datasets: random forest (RF); linear, radial, and polynomial support vector machines (SVM); naive Bayes (NB); neural networks (NN); and linear discriminant analysis (LDA). Results showed that features extracted from the wavelet-scattering-transformed AMLR signals were the most informative data. In combination with the 15 LASSO-selected clinical features, the SVM classifier achieved optimal performance, with an AUC value, sensitivity, and specificity of 92.53%, 84.84%, and 83.04%, respectively, indicating high discrimination performance between the two groups.

1. Introduction

1.1. Tinnitus

Tinnitus is defined as the perception of a phantom sound and the patient’s reaction to it [1,2]. Most experts distinguish between subjective and objective tinnitus [2,3]. Tinnitus constitutes a common auditory symptom that can lead to severe impairment, especially when comorbidities are present [4]. In most sufferers, tinnitus is not due to medical causes, and there is no treatment available [5,6]. In many respects, tinnitus is a heterogeneous and complex condition [7], and it can occur at any age with varying frequency, intensity, and duration.
According to large, independent epidemiological studies, tinnitus affects more than 10% of the general population, whereas 1% of the population considers tinnitus to be their most serious health problem. Prevalence estimates in Europe are expected to double by 2050 [8,9,10]. Although significant scientific progress has been made in recent years [11,12,13,14,15], many patients with tinnitus remain either untreated or incompletely treated. This situation contributes to increased complaints, prolonged suffering, social disengagement, excessive utilisation of healthcare services, and a complex network of referral pathways, resulting in substantial psychological and financial burdens at both national and global levels [1]. Despite its extremely high prevalence and socioeconomic burden, tinnitus remains a scientific and clinical mystery [16,17].
It is worth mentioning that the degree of tinnitus-related distress experienced by patients ranges from a complete absence of discomfort to suicidal tendencies [18], resulting in a spectrum of conditions that require distinct management and therapeutic approaches [19,20]. The various testing methods, although very useful in the diagnosis and planning of therapeutic interventions, cannot capture the degree to which tinnitus is troublesome on a case-by-case basis. For instance, patients with identical audiograms may have varying degrees of tinnitus perception in terms of intensity, severity, and induced disability [21,22]. This weakness of clinical and paraclinical examinations is well compensated by tinnitus self-assessment questionnaires, which have been widely used in the clinical evaluation of tinnitus sufferers [7]. The use of these instruments to grade severity in a standardised way is considered an extremely useful tool in the hands of health professionals. Although the use of self-reported measures is considered good practice, it is important to remember that self-assessment involves bias [1], which influences judgments and responses [19,23].
Several theories have been proposed to explain the mechanisms underlying tinnitus. Over the last decades, it has become apparent that tinnitus is closely related to hearing loss, although the severity of the two does not correlate. Moreover, the causes of tinnitus are not confined to the cochlea, but probably involve many levels of the central auditory pathway, and even the central nervous system [24]. There is growing evidence that tinnitus etiopathogenetic mechanisms may be related to dysfunction or damage in parts of the auditory pathway [22,24]. This clinical hypothesis motivated testing of the auditory pathway with auditory evoked potentials (AEPs) in tinnitus sufferers to assess and evaluate the severity of their tinnitus. In other words, given that the auditory pathway includes several stations involved in the conduction of sound, it is hypothesised that each of them could be associated with the occurrence of tinnitus; or, to state it simply, for an individual to hear tinnitus, one or more of these stations or the connections between them must be affected.

1.2. Auditory Evoked Potentials (AEPs)

An AEP is an electrical signal produced by the brain in response to the presentation of a time-locked auditory stimulus [25,26]. The final AEP signal is composed of the average responses to thousands of stimulus repeats [27]. AEPs are a form of non-invasive and non-behavioural test whose main advantages are their simplicity, objectivity, reproducibility, and cost-effectiveness [22]. Based on a subject’s AEP response, audiologists are able to investigate potential obstacles along the neural pathways that lead to the brain. In addition, these signals may be useful for ruling out or confirming hearing impairments, particularly in neonates, and for medico-legal purposes to rule out benign tumours of the acoustic nerves, such as acoustic neuromas [28].
AEPs are classified as early (auditory brainstem responses—ABRs), middle (auditory middle latency responses—AMLRs), or late (auditory late latency responses—ALLRs) based on their occurrence time after the triggering stimulus [29].
In more detail, the ABR is a sequence of acoustically stimulated signals that reflects synchronised neuronal activity along the neural pathways leading to the brain. It has a lengthy history of application and is regarded as one of the most reliable electrophysiological methods [30,31]. Within 10 milliseconds of the onset of a moderately intense click stimulus, the derived ABR consists of five peaks originating from the auditory nerve and brainstem, annotated with capital Roman numerals (I through V) (Figure 1). Wave I of the ABR reflects the activity of spiral ganglion cells in the distal eighth auditory nerve; wave II originates from the globular cells in the cochlear nucleus; wave III is generated by the cochlear nucleus’ spherical and globular cells; and waves IV and V are generated by the medial superior olive and its projections to the nuclei in the lateral lemniscus and the inferior colliculus [32,33]. Typically, these electrophysiological responses have an amplitude of less than one microvolt (μV) [28,34].
AMLR is typically recorded in a time window of 80 to 100 milliseconds and occurs around 12 to 60 milliseconds following the external stimulation. It is hypothesised that the thalamus and the auditory cortex are responsible for generating this response. AMLR is a waveform with four waves of interest: two troughs (Na and Nb) and two peaks (Pa and Pb) (Figure 1). The AMLR signal is sensitive to low frequencies, and there is typically a discrepancy of approximately 10 dB between the auditory thresholds measured behaviourally and electrophysiologically [35,36]. The shape of these waveforms varies considerably even among healthy individuals, with the Nb and Pb components appearing inconsistently [28].
ALLR is produced by non-primary cortical areas and is utilised to evaluate the integrity of the auditory system beyond the level of AMLR. It typically occurs 60 to 800 milliseconds after the external stimulus [37].
In brief, AEPs have predictable patterns and consist of discrete waves (peaks and troughs), which are the signal’s primary waves of interest and are generated by specific stations along the auditory pathway [22]. The major metrics of an AEP are the latencies (the time between the initial auditory stimulus and the peak or trough of a waveform [31]) and the absolute amplitudes of the signal’s waves of interest [38]. Clinicians rely on these measurements when interpreting the waveforms.
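To illustrate how these two core metrics relate to a sampled waveform, the following minimal R sketch locates a peak within a search window and reports its latency and amplitude; the stand-in waveform, sampling rate, and window are assumptions for illustration only, not the authors’ code.

    # Hypothetical sketch: find the largest peak of a sampled AEP within a
    # search window and report its latency (ms) and amplitude (uV).
    aep <- sin(seq(0, 3 * pi, length.out = 450)) * 0.5  # stand-in waveform (uV)
    fs  <- 30000                                        # sampling rate (Hz)

    window_ms <- c(4, 8)                                # assumed search window
    idx <- round(window_ms / 1000 * fs) + 1             # window as sample indices
    seg <- aep[idx[1]:idx[2]]

    peak_i    <- which.max(seg) + idx[1] - 1            # sample index of the peak
    latency   <- (peak_i - 1) / fs * 1000               # ms after stimulus onset
    amplitude <- aep[peak_i]                            # uV

    cat(sprintf("Latency: %.2f ms, amplitude: %.3f uV\n", latency, amplitude))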
The objective identification and detection of bothersome tinnitus is a critical step in the proper management of patients and the administration of the appropriate intervention or combination of interventions. A detailed audiological evaluation, including ABR and AMLR analysis, could constitute an objective method for reflecting the function of the pathway from the cochlea or auditory nerve to the auditory cortex. These electrophysiological methods are not currently included in routine clinical approaches and have not been clearly correlated with the pathophysiology of tinnitus [39]. However, the utilisation of more objective data could strongly influence the way otolaryngologists investigate and understand tinnitus, steering them towards an evidence-based approach.

1.3. The Scope of the Study

The major objective of this study is to assess the potential contribution of (early and middle) AEPs and clinical characteristics in determining the profile of patients with subjective tinnitus with reference to the severity or degree of distress manifested by their symptoms. For this purpose, we utilised the data collected in the context of a European tinnitus-related research project, named UNITI [18], and we developed a study workflow, which is briefly described below.
Initially, the patients’ AEP signals were used and their time-domain metrics were extracted. Statistically significant differences in these metrics were then found between low- and high-tinnitus-distress sufferers. We next calculated metrics in the time-frequency domain using the wavelet scattering transform (WST) method and evaluated its performance, motivated by this method’s notable prior success in electroencephalogram (EEG) applications [40]. The WST is a mathematical technique used in signal processing and analysis. It is based on wavelets, which are functions capable of analysing signals at different scales and resolutions [40]. By applying this method, we were able to analyse the waveforms in a more comprehensive manner, capturing detailed information about their time-varying and frequency-related characteristics. To the best of our knowledge, although this method has proven very effective in solving EEG signal classification problems, it has not so far been used for AEP signal analysis. Subsequently, seeking to conduct more in-depth research and contribute to the profiling of patients with bothersome tinnitus, we integrated clinical characteristics of sufferers derived from their audiological examinations and responses to questionnaires. At each of the above stages, several well-known classifiers were applied to the various generated datasets in order to evaluate the effectiveness of the selected features in discriminating the level of discomfort caused by tinnitus and to determine the model with the highest classification performance. The classification algorithms applied were linear, radial, and polynomial support vector machines (SVM), random forests (RF), naive Bayes (NB), neural networks (NN), and linear discriminant analysis (LDA). Overall, the various developed classification models used as features AEP metrics in the time and time-frequency domains, as well as AEP metrics combined with clinical characteristics.

2. Materials and Methods

2.1. Data Origin, Recruitment Process, and Patient Characteristics

All data in this study were collected and selected from the tinnitus database of the European project “Unification of treatments and Interventions for Tinnitus patients” (Project Acronym: UNITI, Project Number: 848261, H2020-SC1-2019) [18]. The overall goal of UNITI is to deliver a predictive computational model based on existing and longitudinal data that attempts to determine which treatment approach is optimal for a particular patient, based on specific parameters [18]. The project combines clinical, epidemiological, medical, genetic, and audiological data, including electrophysiological signals (ABR and AMLR waveforms) gathered during a randomized clinical trial (RCT) (ClinicalTrials.gov Identifier: NCT04663828) [41]. Patient data in this study were derived from three different EU clinical centres (Hippocrateion General Hospital of Athens, Greece; Klinikum der Universitaet Regensburg, Germany; and Charité—Universitaetsmedizin Berlin, Germany).
The UNITI project consortium agreed upon the recruitment, inclusion, and exclusion criteria for the RCT participants (Table 1). All possible measures were taken to ensure that there was no discrimination in the recruitment, exclusion, or inclusion processes. Participants were not placed in any situation in which there was a possibility of physical, mental, or emotional harm or in any situation that threatened their physical or mental integrity. No monetary incentives were offered.
Initially, a screening visit took place during which all the shortlisted participants were informed about the whole procedure and the recruited ones signed an informed consent form (ICF). The following procedure included a baseline visit and at least two follow-up visits (nine months after the baseline). During the baseline visit, a complete clinical assessment, collecting patients’ individual characteristics, history, and symptoms, was performed. Additionally, the participants responded to various health and tinnitus questionnaires, and electrophysiological measurements (ABR and AMLR) were conducted. In addition, any comorbidities or concomitant medications or treatments were recorded. All participating patients were required to make at least four visits to the respective clinical centres (1. screening, 2. baseline, 3. interim visit, and 4. final visit) over a period of at least 10 months [35]. Screening and baseline visits could be separated or combined. For the purposes of this study, data were derived exclusively from the screening and baseline visits so that the clinical data were time-corresponded to the waveform recordings.

2.2. Electrophysiological Measurements

All the waveforms were recorded and extracted with the Interacoustics Eclipse system (module EP25) [44]. This system offers the possibility of exporting raw measurements of auditory evoked potentials in .xml (Extensible Markup Language) files. The exported files did not contain any patient data that would violate privacy and security requirements under the EU General Data Protection Regulation (GDPR). The clinicians of the UNITI consortium pre-agreed on a standardised protocol for electrode placement and recording setups that was utilised by all clinical centres. This contributed to the uniformity of the collected data’s structure and quality, enabling comparisons and group analyses.
For the recording of the ABR waveforms, the stimulus used was a click, presented at a repetition rate of 22 stimuli per second and an intensity level of 80 dB nHL. The recorded signal was filtered with a high-pass filter set at 33 Hz (6 dB per octave) and a low-pass filter set at 1500 Hz, and the sample rate was 30 kHz. For recording the AMLR waveforms, the stimulus used was a 2 kHz tone burst with a duration of 28 sine waves, presented at a rate of 6.1 stimuli per second and an intensity level of 70 dB nHL. The recorded signal was filtered with a high-pass filter set at 10 Hz (12 dB per octave) and a low-pass filter set at 1500 Hz, and the sample rate was 3 kHz. More details on the stimulus and acquisition parameters of each test can be found in Table 2.
There were important parameters in the .xml files that had to be comprehended and used in order to reconstruct and visualise the ABR and AMLR records through the R programming language. Although there is not a publicly accessible .xml schema for the exported files, the manufacturer’s “Additional Information” manual for the EP25 module (https://www.manualslib.com/products/Interacoustics-Eclipse-Ep25-11647463.html, accessed on 5 December 2022) contains a description of the .xml header [45].
The R programming language and its accompanying packages, XML [46], xml2 [47], ggplot2 [48], signal [49], seewave [50], tuneR [51], gsignal [52], MIMSunit [53], and base [54], were utilised to read the exported .xml files, rebuild the waveforms, and extract the associated data metrics.
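As an illustration of this step, the sketch below reads one exported recording with xml2 and rebuilds the waveform with ggplot2; since the .xml schema is not publicly documented, the node names (“Point”, “Amplitude”) and file name are hypothetical placeholders.

    library(xml2)
    library(ggplot2)

    # Hypothetical sketch: node and file names are assumptions,
    # not the actual Eclipse EP25 export schema.
    doc    <- read_xml("abr_recording.xml")
    amp_uV <- as.numeric(xml_text(xml_find_all(doc, ".//Point/Amplitude")))

    fs <- 30000                                   # ABR sample rate (Hz)
    df <- data.frame(time_ms      = (seq_along(amp_uV) - 1) / fs * 1000,
                     amplitude_uV = amp_uV)

    # Rebuild and visualise the waveform
    ggplot(df, aes(time_ms, amplitude_uV)) +
      geom_line() +
      labs(x = "Time after stimulus (ms)", y = "Amplitude (uV)")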

2.3. Overall Study Workflow: From AEP Metrics to Classification Models Building

The main purpose of this study was to utilise the AEP signals and various clinical characteristics in order to distinguish between low- and high-tinnitus-distress patient groups. Towards this goal, we extracted various metrics from the AEP signals and utilised them to build machine learning models. More specifically, we considered classical time-based features, which are also used as metrics by clinicians to evaluate these signals, and proceeded with more sophisticated wavelet scattering features in the time-frequency domain. In addition, clinical data from the UNITI project were integrated as input features into the classification models, aiming to highlight the predominant features for tinnitus-related distress. All the steps involved in our study’s workflow for building the various classification models are presented in Figure 2 and detailed in the following subsections.

2.4. Descriptive and Statistical Analyses in Time Domain

Statistical analyses were performed in continuation of previous studies [22,32,55] to determine the electrophysiological differences between patients suffering from subjective tinnitus, based on their manifested discomfort, ranging from mild/moderate distress to severe/catastrophic distress. We attempted to identify statistical differences in the core time-domain metrics of the AEP signals, i.e., the amplitudes and latencies of the waves of interest, which are also used by clinicians to interpret and evaluate the relevant waveforms. The metrics in which statistically significant differences were identified were selected as input features for the classification algorithms. A p-value of less than 0.05 was considered statistically significant, and parameter estimates were presented with their 95% confidence intervals (CIs). Descriptive analyses included mean scores followed by standard deviations (SDs), as well as medians followed by minimum and maximum values for the continuous variables. t-tests for independent samples with equal or unequal variances (i.e., homoscedastic and heteroscedastic samples) were used as appropriate to show whether the differences between the selected waveform metrics of the compared groups were statistically significant.
Peaks and troughs of each waveform were annotated using two automated tools for auditory evoked potential wave detection and annotation that were developed in the context of another study [56]. All statistical analyses and graphs were created using the R programming language, through the RStudio interface (version: 4.2.0) and its accompanying packages. Moreover, it was crucial to determine whether the samples followed a normal distribution, which can be examined using either analytical or graphical methods. However, in analytical tests of normality, the computed p-value is influenced by the sample size, and a large number of waveforms were employed in the present study. Hence, to avoid erroneously small p-values when testing for normal distributions, graphical methods were selected as the most appropriate, particularly quantile-quantile (QQ) plots. The visualisation of the QQ plots indicated that the compared samples followed a normal distribution, which is why parametric t-tests for independent samples (unpaired t-tests) were chosen. As an example, Figure S1 shows the QQ plot of the dependent variable “Pb latency” for the groups obtained based on their THI scores. Through the leveneTest function of the car package (version: 3.1-0) [57,58], Levene’s tests were conducted to test the homogeneity-of-variance assumption of the independent-samples t-test. If Levene’s tests indicated unequal variances, Welch’s t-tests (Welch’s unequal-variances t-tests) were used to compare the groups, as an alternative to the traditional parametric analysis. For the two-tailed t-tests for independent samples, the t.test function of the stats package (version: 3.6.2) [59] was used, with the argument var.equal set to FALSE or TRUE depending on the results of Levene’s test. The t-test for independent groups determined whether there was a difference between two independent groups. However, p-values do not indicate the strength of a difference, only whether it is significant [60]. The effect size, which is widely used for meta-analysis and power analysis, demonstrates how “strong” the difference between the groups is. In this study, effect sizes were therefore calculated alongside the statistical analyses. To compute standardised effect sizes with Cohen’s d, the cohen.d function of the effsize package (version: 0.8.1) [61] was used.
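A minimal R sketch of this testing pipeline is given below; the data frame, variable names, and simulated values are illustrative assumptions, not the study’s actual data.

    library(car)      # leveneTest()
    library(effsize)  # cohen.d()

    # Illustrative data: one waveform metric and a two-level distress group
    set.seed(1)
    df <- data.frame(
      metric = c(rnorm(100, 5.6, 0.3), rnorm(100, 5.5, 0.3)),
      group  = factor(rep(c("high", "low"), each = 100))
    )

    # Levene's test for homogeneity of variance
    lev       <- leveneTest(metric ~ group, data = df)
    equal_var <- lev$`Pr(>F)`[1] >= 0.05

    # Two-tailed independent-samples t-test (Welch's if variances differ)
    tt <- t.test(metric ~ group, data = df, var.equal = equal_var)

    # Standardised effect size (Cohen's d)
    d <- cohen.d(metric ~ group, data = df)

    c(p_value = tt$p.value, cohens_d = unname(d$estimate))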
In the first phase, statistical analyses were performed using the THI questionnaire score [43] as the sole classification criterion. Therefore, all the electrophysiological data were divided into two groups, reflecting the patients’ degree of distress derived from tinnitus. As presented in Table 1, one criterion for the inclusion of participants in the study was whether they had a score of greater than or equal to 18 on the THI questionnaire. Therefore, even in the case of the minimum acceptable score, discomfort was present. A score of 48 was chosen as the optimal cut-off threshold between the two distress groups, after observing the participants’ score distribution and (a) seeking a numerical balance between the two groups being compared, and (b) isolating the patients with severe to catastrophic (bothersome) tinnitus in one category. Hence, the first group included all waveforms of participants with a THI score greater than or equal to 48, classifying them in the severe to catastrophic tinnitus-distressed group, while the second group included all waveforms of participants with a THI score less than 48, classifying them in the mild to moderate tinnitus-distressed group. The results of these statistical analyses were utilised in the feature selection phase for the time-domain waveform metrics. In particular, the waveform metrics for which statistically significant differences between the compared groups were identified (p-value < 0.05) were used as features in the relevant classification models.
Subsequently, a more in-depth statistical analysis was performed. In this case, more stringent criteria, based on hearing level and gender, were applied to form the compared groups, as the literature indicates that these two factors influence waveform metrics [22,32,55]. In particular, based on the audiometric thresholds at octave frequencies ranging between 250 Hz and 8 kHz, each ear was classified into one of three groups: normal hearing [0–20 dB HL], mild hearing loss [21–60 dB HL], and severe hearing loss [>61 dB HL]. Each group was then divided into four sub-groups according to the participant’s gender and score on the THI questionnaire, so that a total of twelve sub-groups emerged, as sketched below. Descriptive and statistical analyses of the groups under comparison were then undertaken.
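This stratification can be sketched in R as follows; the data frame and column names are hypothetical placeholders for the study’s actual variables.

    # Hypothetical sketch: stratify ears into the twelve subgroups
    ears <- data.frame(
      pta_dB = c(12, 35, 70, 18, 55, 80),              # audiometric threshold
      gender = factor(c("F", "M", "F", "M", "F", "M")),
      thi    = c(20, 60, 44, 52, 70, 30)               # THI questionnaire score
    )

    ears$hearing  <- cut(ears$pta_dB, breaks = c(-Inf, 20, 60, Inf),
                         labels = c("normal", "mild_loss", "severe_loss"))
    ears$distress <- factor(ifelse(ears$thi >= 48, "high", "low"))

    # Twelve subgroups: 3 hearing levels x 2 genders x 2 distress levels
    ears$subgroup <- interaction(ears$hearing, ears$gender, ears$distress)
    table(ears$subgroup)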

2.5. Wavelet Scattering Transform in Time–Frequency Domain

The use of the wavelet scattering transform (WST) for signal processing was introduced by Prof. Stéphane Mallat and has grown in popularity over recent decades [62,63]. Mallat et al. proved the WST’s ability to retrieve trustworthy information at various scales. The transform yields time and frequency representations that are translation-invariant and deformation-stable, while maintaining high-frequency information useful for classification [64,65]. It has also been demonstrated in the literature that wavelet scattering coefficients are more insightful than Fourier transform coefficients when handling signals with brief variations or minor deformations and rotations [62,66,67]. In addition, the WST combines the benefits of conventional and deep-learning methodologies [68], integrating the common properties of multiscale contractions, the linearization of hierarchical symmetries, and sparse representation.
In summary, wavelet scattering networks perform the three main operations that comprise a deep network: convolution, non-linearity, and pooling. In this case, convolution is performed by wavelets, the modulus operator serves as the non-linearity, and filtering with wavelet low-pass filters is analogous to pooling [69]. A fundamental contrast with deep convolutional networks is that the filters in a scattering network are established a priori rather than learnt. As the scattering transform is not required to learn the filters, it can often be used successfully in situations where training data are scarce [69]. Scattering networks enable the automated generation of features that minimise differences within a class while maintaining discriminability across classes [64,69]. The WST can therefore be used as an automatic, robust feature extractor for classification, since its extracted features are known to be translation- or shift-invariant and stable against time-warping deformations. The features extracted using the scattering framework can then be fed into a deep-learning or machine-learning model for classification. Consequently, we conducted AEP signal analysis through the use of wavelet scattering networks; to the best of our knowledge, this has not been carried out before in any related studies.
In more depth, the WST is a cascaded decomposition and convolution of a signal with wavelets, followed by a complex modulus and local averaging [70]. The initial step in computing the WST is the convolution of the signal $x$ with the dilated mother wavelet $\psi$ of centre frequency $\lambda$, i.e., $x \ast \psi_{\lambda}$. The convolved signal oscillates at scale $2^{j}$, so averaging it yields zero. It is therefore subjected to a nonlinear (modulus/rectifier) operator that eliminates these oscillations (i.e., the complex phase), giving $|x \ast \psi_{\lambda}|$. This operation increases the frequency content of the signal by a factor of two and can compensate for information losses due to downsampling. Finally, a time-averaging low-pass filter $\phi$ is applied to the absolute convolved signal, i.e., $|x \ast \psi_{\lambda}| \ast \phi$. Therefore, over a half-overlapping time window of size $2^{J}$, the first-order scattering coefficients are defined as the average absolute amplitudes of the wavelet coefficients at any scale $1 \le j \le J$ and are acquired as
$$S_{1}x(t, \lambda_{1}) = |x \ast \psi_{\lambda_{1}}| \ast \phi.$$
Repeating the above procedure on each $|x \ast \psi_{\lambda_{1}}|$ yields the second-order scattering coefficients, i.e.,
$$S_{2}x(t, \lambda_{1}, \lambda_{2}) = \big| |x \ast \psi_{\lambda_{1}}| \ast \psi_{\lambda_{2}} \big| \ast \phi.$$
By further repetition, the higher-order ($m \ge 2$) wavelet scattering coefficients can be estimated as
$$S_{m}x(t, \lambda_{1}, \ldots, \lambda_{m}) = \Big| \big| |x \ast \psi_{\lambda_{1}}| \ast \psi_{\lambda_{2}} \big| \ast \cdots \ast \psi_{\lambda_{m}} \Big| \ast \phi.$$
The zero-order scattering coefficients, derived with a time average $S_{0}x(t) = x \ast \phi$, capture the local translation invariance of the signal. The averaging operation at each stage eliminates the high-frequency content of the convolved signal, which is recovered by convolving the signal with the wavelet at the subsequent stage. With each additional layer, the energy of the scattering coefficients diminishes, with the top two levels holding 99% of the energy [71]. Additionally, Ahmad et al. [72] determined that an order of 2 is ideal when using the WST to extract features from EEG.
The process for calculating the WST coefficients at each level is shown in Figure 3, and the aggregated coefficients were then employed as the features.
In our study, one wavelet scattering network per AEP subtype was used as a feature extractor for the classification task. These networks were implemented in MATLAB and applied separately to each AEP subtype. A second-order wavelet scattering network was established in both cases. The steps carried out to extract the features from the two wavelet scattering networks are described hereafter.
Initially, the raw ABR and AMLR signals were extracted for further analysis. There were 496 waveforms per test and 450 samples, i.e., generated amplitude values (in µV), per waveform. The signals were then imported into MATLAB and the two wavelet time scattering networks were constructed. The key parameters to be specified for the scattering networks were the invariance scale [73] and the quality factors of the scattering filter banks, that is, the number of wavelets per octave in each wavelet filter bank [68,74]. The wavelet transform discretises the scale using the specified number of wavelet filters [75]. In many applications, a cascade of two filter banks is sufficient to achieve good performance, so this number was used in both of our wavelet scattering networks.
After constructing the two scattering networks, we obtained the scattering coefficients for the two signal types. In both applications, the natural logarithms of the scattering coefficients were calculated [73,76]. Each WST yielded a scattering paths-by-scattering windows-by-signals tensor, which was transformed into a matrix that could be further fed into machine-learning classification algorithms.
The wavelet-scattering method creates a large number of features, so we decided to limit the number of features for efficiency purposes. Wavelet scale averaging [77] was used in our analysis to reduce dimensionality by transforming the two-dimensional feature matrix of each signal into a one-dimensional feature vector, meeting the input requirements of each classifier. In particular, this method averages over the wavelet scale (time window) dimension and outputs the averaged coefficients.
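For illustration, this averaging step amounts to a row-wise mean; in the minimal R sketch below, the matrix wst stands in for the (scattering paths × time windows) coefficients of one signal (simulated values, not actual study data).

    # Hypothetical sketch: collapse a (paths x windows) scattering matrix
    # into one feature vector by averaging over the time windows.
    wst <- matrix(rnorm(40 * 15), nrow = 40)  # e.g., one ABR signal (40 x 15)

    features <- rowMeans(wst)  # one averaged coefficient per scattering path
    length(features)           # 40 features, as in the ABR dataset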
For every AEP subtype, a unique dataset that included the final resulting wavelet scattering coefficients was extracted. The first dataset included the coefficients obtained from the WST of the ABR signals, and the second the coefficients obtained from the WST of the AMLR signals. Moreover, a combined dataset including the coefficients of both AEPs was created. The coefficients included in these datasets were used as features for the development of machine learning models (the 40, 65, and 105 ABR and AMLR features that are referenced in Figure 2).

2.6. Patients’ Clinical Data Integration

The clinical data were extracted from the UNITI database and were collected during the patients’ visits to the clinical centres. The dataset included 1302 patient visits, and every visit contained 630 variables, representing all the characteristics that could be filled in. The vast majority of these variables concerned answers to questionnaires, and the remaining columns concerned the results of the audiological measures (otological examination, pure tone audiometry [78], tinnitus pitch/loudness match, tinnitus maskability, residual inhibition). The clinical data contained information from a total of 27 questionnaires. The selection of questionnaires in each clinical centre varied according to the centre’s routine clinical practice and individual patient characteristics. In the context of the UNITI project, a consensus was reached on a common set of eight questionnaires used in daily clinical practice. These questionnaires included the Tinnitus Handicap Inventory (THI) [43], the Tinnitus Functional Index (TFI) [79], the Tinnitus Severity (TS) [80], the Mini-Tinnitus Questionnaire (Mini-TQ) [81,82], the Questionnaire on Hypersensitivity to Sound (GUF) [83,84], the Patient Health Questionnaire (PHQ-9) [85], the WHO Quality of Life Questionnaire (WHOQOL-BREF) [86], and the European School of Interdisciplinary Tinnitus Research Screening Questionnaire (ESIT-SQ) [87].
From the above-mentioned questionnaires, we selected a subset to be used as features in our classification models. Seeking to develop models affected as little as possible by the subjectivity of the sufferers [88,89], we removed the questionnaires that produce a single total score. Only the THI, which served as the study’s dependent variable, and the GUF [90], which is not directly related to tinnitus and its severity assessment, were kept. Concerning the ESIT-SQ questionnaire, which is intended to collect medical characteristics (related and unrelated to tinnitus), only the questions completed by all three clinical centres were retained.
Hereinafter, the selection of the clinical data for building the classification models is described. The interim and final visits were excluded, and baseline values were selected over screening-visit values when both were available. Clinical variables with excessive missing values, or with near-constant values offering no meaningful information, were also removed. In the remaining cases with missing data, the missing values were replaced with the means or most frequent values of the specific variables, as sketched below. Patients with no registered waveforms were also removed from the analysis.
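A minimal sketch of this imputation rule in R, on an illustrative data frame (column names and values are hypothetical):

    # Hypothetical sketch: impute missing values with the mean (numeric
    # variables) or the most frequent value (categorical variables).
    clinical <- data.frame(age    = c(45, NA, 52),
                           smoker = c("no", "yes", NA))

    impute_column <- function(x) {
      if (is.numeric(x)) {
        x[is.na(x)] <- mean(x, na.rm = TRUE)
      } else {
        x[is.na(x)] <- names(which.max(table(x)))  # modal value
      }
      x
    }

    clinical <- as.data.frame(lapply(clinical, impute_column))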
It is noteworthy that some additional steps had to be performed on the data related to the audiological tests. The values of the related variables had been derived separately for each ear, and they had to be modified in order to be utilised in the classification models performed on patients. For bilateral tinnitus patients, the values for “hearing loss”, “maximum tinnitus frequency”, “tinnitus matching loudness (in dB)”, and “minimal masking level (in dB)” were averaged between the two ears. For unilateral tinnitus patients, the above variables were assigned values based on the affected ear. Lastly, for head-derived tinnitus patients, the values from the ear(s) where tinnitus matching was detected were utilised. This resolution rule is sketched below.
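The following R sketch illustrates the rule; the function, argument names, and values are hypothetical placeholders.

    # Hypothetical sketch of the ear-resolution rule described above.
    # 'laterality' is one of "bilateral", "left", "right", or "head";
    # 'left'/'right' hold the per-ear values of an audiological variable.
    resolve_ear_value <- function(laterality, left, right,
                                  matched_ears = c("left", "right")) {
      if (laterality == "bilateral") {
        mean(c(left, right))                        # average the two ears
      } else if (laterality %in% c("left", "right")) {
        if (laterality == "left") left else right   # take the affected ear
      } else {                                      # head-derived tinnitus:
        vals <- c(left = left, right = right)       # use the ear(s) where
        mean(vals[matched_ears])                    # matching was detected
      }
    }

    resolve_ear_value("bilateral", left = 30, right = 40)  # 35
    resolve_ear_value("left",      left = 30, right = 40)  # 30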
Following this procedure, a data frame including all the selected clinical features was created (Table S1). Every row in the data frame referred to a unique patient, and each column to a different clinical feature.

2.7. Building Classification Models

2.7.1. Classification Models and Performance Evaluation

Based on the analysis conducted previously, we used the obtained AEP waveform metrics, in both the time and time-frequency domains, as features for machine learning models. The aim of the classifiers was to assign tinnitus ears to the low- or high-tinnitus-related-distress group. Various classifiers, including LDA [91], RF [92], NB [93], polynomial, linear, and radial SVM [94], and NN [95], were developed. The target variable was the THI score category (low or high THI score/tinnitus distress). The classifiers were trained, tested, and validated in the R programming language using the caret package (version 6.0-94). Ten-fold cross-validation was implemented for all the models, reporting the mean sensitivity, specificity [96], and AUCROC (area under the ROC curve) [97] for performance evaluation.
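A minimal caret sketch of this training scheme is given below; the simulated feature matrix and tuning defaults are assumptions for illustration, not the study’s exact setup (the listed methods draw on the randomForest, kernlab, klaR, and nnet packages).

    library(caret)

    # Illustrative data: ten numeric features plus a two-level target
    set.seed(42)
    df <- data.frame(matrix(rnorm(200 * 10), nrow = 200))
    df$distress <- factor(rep(c("high", "low"), 100))

    # Ten-fold CV reporting ROC AUC, sensitivity, and specificity
    ctrl <- trainControl(method = "cv", number = 10,
                         classProbs = TRUE, summaryFunction = twoClassSummary)

    # One caret method string per classifier used in the study
    methods <- c(rf = "rf", svm_linear = "svmLinear", svm_radial = "svmRadial",
                 svm_poly = "svmPoly", nb = "nb", nnet = "nnet", lda = "lda")

    fits <- lapply(methods, function(m)
      train(distress ~ ., data = df, method = m,
            metric = "ROC", trControl = ctrl))

    fits$rf$results[, c("ROC", "Sens", "Spec")]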

2.7.2. Feature Selection in Clinical Data Using LASSO

To develop a classification model that was both precise and reliable, we investigated which patient data were most relevant for discriminating between low- and high-distress sufferers. We therefore utilised a feature selection method to identify the most important clinical features and build a classification model based solely on patients’ clinical data. Specifically, the least absolute shrinkage and selection operator (LASSO) method, with 10-fold cross-validation and 100 iterations, was performed to select the most relevant features for the classification models. LASSO regression, implemented using the R package glmnet (version 4.1-4) [98], applies a penalty term to the regression coefficients, encouraging sparse solutions and effectively performing feature selection.
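A minimal glmnet sketch of this selection step, on simulated stand-in data (not the study’s actual clinical variables):

    library(glmnet)

    # Illustrative data: 33 clinical features, binary distress label
    set.seed(7)
    x <- matrix(rnorm(248 * 33), nrow = 248,
                dimnames = list(NULL, paste0("feat_", 1:33)))
    y <- factor(sample(c("high", "low"), 248, replace = TRUE))

    # LASSO (alpha = 1) logistic regression with 10-fold cross-validation;
    # the study repeated this selection over 100 iterations
    cvfit <- cv.glmnet(x, y, family = "binomial", alpha = 1, nfolds = 10)

    # Features with non-zero coefficients at the CV-chosen lambda are kept
    coefs    <- as.matrix(coef(cvfit, s = "lambda.min"))
    selected <- setdiff(rownames(coefs)[coefs != 0], "(Intercept)")
    selected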

2.7.3. Integration of AEP Metrics and Clinical Characteristics

In order to build a more accurate and robust classification model, we integrated the patients’ clinical data with the AEP characteristics, as sketched below. The analysis considered each patient’s ear as an instance for our model, combining features from both the clinical data and the waveform metrics. Audiological measurements unrelated to the tinnitus-matching ears were removed from the final dataset.
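A sketch of this integration step in R; the data frames, identifiers, and feature counts shown are hypothetical placeholders.

    # Hypothetical sketch: one row per ear, combining per-ear AMLR WST
    # coefficients with patient-level clinical features via a shared id.
    amlr_wst <- data.frame(ear_id     = 1:6,
                           patient_id = rep(1:3, each = 2),
                           matrix(rnorm(6 * 65), nrow = 6))  # 65 coefficients
    clinical <- data.frame(patient_id = 1:3,
                           age = c(45, 60, 52), guf = c(10, 22, 5))

    integrated <- merge(amlr_wst, clinical, by = "patient_id")
    dim(integrated)  # 6 ears x (ids + 65 WST + clinical features)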

3. Results

3.1. Descriptive and Statistical Analyses in the Time Domain

All waveforms used in the statistical analyses were obtained from 248 participants, of whom 101 were women, aged 24 to 76 years, with a mean age ± SD = 52.21 ± 12.16, and 147 were men, aged 25 to 77 years, with a mean age ± SD = 53.52 ± 12.76. For both AEP subtypes, two waveforms were recorded for each participant, one for each ear, resulting in a total of 496 waveforms per AEP subtype.
The subsequent two subsections expound upon the principal findings derived from the descriptive and statistical analyses of this study. Supplementary Materials, encompassing Tables S2–S9 and Figures S2–S5, provide further detailed information pertaining to the study’s results.

3.1.1. Grouping Using the THI Score

In the initial phase, statistical analyses were conducted using the THI score [43] as the only criterion to divide the study participants into two groups. Waveforms were classified accordingly, with 45.97% attributed to highly distressed tinnitus sufferers and the remaining 54.03% to moderately distressed tinnitus sufferers. Hereinafter, the results are shown as boxplots for the two groups’ amplitudes and latencies (Figure 4, Figure 5, Figure 6 and Figure 7), accompanied by tables including the descriptive statistics and the numerical results of the statistical analyses for all the pairs of compared groups (Tables S2–S5).
Descriptive statistics for the ABR waveforms indicated that sufferers with the lowest THI score had higher mean latencies for all three waves of interest (peak I, peak III, and peak V), while those with the highest THI score exhibited higher mean amplitudes (Figure 4 and Figure 5, Table S2). Conversely, the descriptive statistics for the AMLR waveforms revealed that sufferers in the highest THI score group displayed higher mean latencies and absolute mean amplitudes for all four waves of interest (Na trough, Pa peak, Nb trough, and Pb peak) (Figure 6 and Figure 7, Table S3).
The two-tailed t-tests for independent samples revealed that there were some statistically significant differences between the compared groups. Regarding ABR waveforms, significant differences were observed in peak III and peak V latencies, as well as in peak I amplitude (Figure 4 and Figure 5, Table S4). For AMLR waveforms, all metrics, except Na trough latency, showed statistically significant differences (Figure 6 and Figure 7, Table S5). In all cases, effect sizes indicated a small to medium effect.

3.1.2. Grouping Using the THI Score Combined with Gender and Hearing Level

In a second stage, the compared groups consisted of waveforms from participants with matching hearing levels and genders, as these factors strongly influence waveform metrics. These factors, along with the THI score, were used as classification criteria for the patients’ waveforms, resulting in the emergence of twelve subgroups.
Descriptive statistics for the ABR waveforms indicated that subgroups with high THI scores (≥48) generally exhibited lower latency metrics than their corresponding subgroups with lower THI scores (<48). However, there were exceptions, including peak III latency in the “females with normal hearing” subgroup and peak V latency in the “males with severe hearing loss” subgroup. Amplitude metrics showed the opposite pattern, with high-THI subgroups showing higher amplitudes than their corresponding low-THI subgroups (Figures S2 and S3, Table S6). Exceptions were observed for peak I amplitude in the “males with severe hearing loss” subgroup and peaks III and V amplitudes in the “females with normal hearing” subgroup.
Descriptive statistics for AMLR waveforms revealed that subgroups with high THI scores generally exhibited equal or greater latency and amplitude metrics (absolute values) compared to their corresponding subgroups with low THI scores (Figures S4 and S5, Table S7). Exceptions were observed for Na and Pb latencies in the “females with severe hearing loss” subgroup, Na and Nb amplitudes in the “females with normal hearing” subgroup, and Nb amplitude in the “males with normal hearing” subgroup.
The two-tailed t-tests for independent samples revealed statistically significant differences between the compared groups. In the statistically significant differences identified, the effect sizes indicated a medium-to-large or large effect. For ABR waveforms, these differences concerned the following subgroups (Figures S2 and S3, Table S8):
  • “Males with normal hearing” with respect to peak III latency and peak V amplitude;
  • “Females with mild hearing loss” with respect to peak III and peak V latencies, and peak I amplitude;
  • “Males with mild hearing loss” with respect to peak I amplitude;
  • “Females with severe hearing loss” with respect to peak III and peak V latencies, and peak III amplitude;
  • “Males with severe hearing loss” with respect to peak III latency.
Similarly, for AMLR waveforms, these significant differences concerned the following subgroups (Figures S4 and S5, Table S9):
  • “Females with normal hearing” with respect to Nb trough and Pb peak latencies;
  • “Females with mild hearing loss” with respect to Pa peak and Nb trough amplitudes;
  • “Males with mild hearing loss” with respect to Nb trough latency and Na trough, Pa peak, and Nb trough amplitudes;
  • “Males with severe hearing loss” with respect to Na trough amplitudes.
Tables S6–S9 provide descriptive statistics and the results of the statistical analyses for all compared groups. Correspondingly, Figures S2–S5 present boxplots illustrating the amplitudes and latencies of the compared metrics.

3.2. Wavelet Scattering Transform (WST) in the Time–Frequency Domain

3.2.1. The Wavelet Scattering Transform Method

In the context of this study, two filter banks were used in both wavelet scattering networks. In particular, we constructed two wavelet time scattering networks with default filter banks, i.e., eight wavelets per octave in the first filter bank (Q1) and one wavelet per octave in the second filter bank (Q2), in both networks [73]. Figure 8 depicts the quality factors (Q1 and Q2) and their Littlewood–Paley sums in the AMLR wavelet scattering network. For the ABR wavelet scattering network, the signal length was set to 450, the invariance scale was set to 0.006 s, and the sampling frequency was set to 30,000 Hz. For the AMLR wavelet scattering network, the signal length was set to 420, the invariance scale was set to 0.081 s, and the sampling frequency was set to 3000 Hz.
Each WST was computed to the zeroth, first, and second orders by feeding the signals into the corresponding wavelet scattering network, and the wavelet scattering coefficients were output stage by stage. For each signal, the zeroth-order scattering coefficients constituted the convolution of the original signal with the scaling function; the first- and second-order scattering outputs constituted a two-dimensional matrix (scattering paths × time windows) of wavelet scattering coefficients. Based on the constructed scattering networks and their parameters, the AEP signals were presented to the wavelet scattering networks for the wavelet scattering transformations.
The extracted scattering features for the ABR signals formed a 40 × 15 × 496 tensor; each page of the tensor (40 × 15) was the scattering transform of one signal. The WST was critically downsampled in time based on the bandwidth of the scaling function [73], which in this case resulted in 15 time windows for each of the 40 scattering paths. Correspondingly, the extracted scattering features for the AMLR signals formed a 65 × 7 × 496 tensor, with 7 time windows for each of the 65 scattering paths.

3.2.2. Dimensionality Reduction in WST

In order to obtain two matrices compatible with the available data frames, i.e., to match the generated wavelet coefficients with the ear from which they were derived, the two tensors had to be transformed appropriately. Specifically, wavelet scale averaging [77] was used to transform the two-dimensional feature matrix of each signal into a one-dimensional feature vector, meeting the input requirements for building the classification models. This method averages over the time window (wavelet scale) dimension and results in a vector of averaged coefficients. After its application, 40 coefficients were obtained for each ABR signal and 65 coefficients for each AMLR signal, yielding one vector of coefficients per AEP signal.
The vectors of the scattering coefficients were calculated and extracted using MATLAB. Then, three datasets were created with wavelet scattering coefficients as features. The first one included the ABR WST coefficients, the second one included the AMLR WST coefficients, and the third dataset included the WST coefficients of both AEPs (Table 3).

3.3. Patients’ Clinical Data Integration

Following all the steps described in Section 2.6, a dataset was generated from the UNITI project data. In this dataset, every row corresponded to a different patient and each column to a different feature. The dataset contained 248 patients and 33 patient characteristics, which were used as features in our modelling. Table S1 describes in detail the variables included in this dataset, together with their descriptions and value ranges.

3.4. Classification Models

This section presents all the performance tables of the classification models developed within the study, followed by the ROC curves of the classifiers with the highest AUC values among the models. Specifically, the tables present the results of the selected measures (AUC, sensitivity, and specificity) that quantified the discriminative accuracy of the seven classifiers in order to compare and identify the classifier with the highest classification performance for each dataset examined.

3.4.1. Time-Domain Models

From the statistical analyses performed, statistically significant differences (p-value < 0.05) were found in the aforementioned metrics between the compared groups. These metrics were used as input features in the relevant classification models. Table 4 presents the resulting input features for our models. Table 5 shows how well each of these classification models performed, and Figure 9 shows the ROC curve of the RF classifier, which had the highest AUC value.

3.4.2. Time–Frequency-Domain Models

In this section, we present the performance of the models developed using the resulting coefficients of the two wavelet scattering networks as input features. Table 6 presents the performances of the models with the 40 coefficients derived from the WST of the ABR signals as features. Table 7 presents the performances of the models with features of the 65 coefficients obtained from the WST of the AMLR signals. Finally, Table 8 presents the performances of the models that used all 105 coefficients from the WST for both AEP signals cumulatively. Similarly, Figure 10, Figure 11 and Figure 12 depict the ROC curves of the classifiers with the highest AUC value for each case.

3.4.3. Integration of Clinical Features

The performance of the classification models developed with all 33 clinical features is presented in Table 9. The ROC curve of the polynomial SVM classifier, which had the highest AUC value, is illustrated in Figure 13.
Applying the LASSO regression, 15 of the 33 clinical features were selected as the most relevant. The names of the selected characteristics, whose descriptions are included in Table S1, were the following: “Age”, “Height”, “Alcohol”, “Hearing loss 6000”, “Hearing loss”, “Matching type”, “Family history”, “Education”, “Vertigo”, “Frequency”, “Day pattern”, “Number sounds”, “Quality”, “Rhythmic”, and “GUF”. The classification models were developed using these 15 selected features, and their performances are displayed in Table 10. As can be seen from the table, all classification models performed better with the LASSO-selected subset of features. The ROC curve of the polynomial SVM classifier, which scored the highest AUC value, is illustrated in Figure 14.

3.4.4. Integrated Models Combining AEP and Clinical Features

Observing the performances of all the models presented up to this point, it appeared that the models using the wavelet scattering coefficients of the AMLR signals as input features scored the highest values for AUC, sensitivity, and specificity. To generate more accurate classification models using the same classifiers, we incorporated, along with the AMLR wavelet scattering coefficients (n = 65), the 15 clinical features selected by LASSO. Table 11 shows how well each of these classification models performed, and Figure 15 shows the ROC curve of the radial SVM classifier, which had the highest AUC value.

4. Discussion

The presence of tinnitus can profoundly affect thoughts, emotions, and dispositions. Its heterogeneity among patients results in a range of distress levels, from mild discomfort to the complete disruption of personal and professional life [2]. Objective measurement of tinnitus distress lags behind, with limited utilisation of evidence-based data and objective practices in the medical community. Severity classification currently relies on clinical guidelines that consider the occupational and social impacts [99]; the assessment is conducted through graded self-report questionnaires [100] and structured medical histories [101], which are used to gauge tinnitus severity, distress, and quality-of-life impact.
The lack of tailored treatments for subjective tinnitus stems from its heterogeneity, impeding effective management for specific patient groups. Establishing an objective and reliable approach to detect and classify tinnitus distress would assist personalised treatment, reduce ENT clinic visits, lower healthcare costs, and enhance patient confidence and satisfaction.
Subjective tinnitus is attributed to aberrant neuronal activity in the auditory cortex, arising from disruptions or modifications in the auditory pathway [102]. This disturbance may lead to a loss of cortical activity suppression and the formation of new neural connections. Conductive hearing loss, caused by factors such as cerumen impaction or otitis media, can also be associated with subjective tinnitus by altering the sound input to the central auditory system. AEPs, such as the ABR waveforms paired with AMLR, offer an affordable and non-invasive means to examine the auditory pathway [103], providing valuable insights that may be beneficial for assessing the tinnitus pathophysiology.
In the present study, we aimed to extract features from both ABR and AMLR signals, as well as other clinical characteristics, and assess their performance in distinguishing between low- and high-tinnitus-distress patient groups. To achieve this, we gathered several metrics from the AEP signals and used them to build machine-learning models. In particular, we analysed conventional time-domain metrics, for which statistically significant differences were found between the two groups. Motivated by the effective application of the WST in EEG classification problems, we utilised this advanced transform method to extract coefficients as features in the time-frequency domain from the induced ABR and AMLR signals. To the best of our knowledge, this feature extraction method has not previously been applied to AEP signals. In an effort to conduct more in-depth research and contribute to the better profiling of bothersome tinnitus patients, we integrated their clinical characteristics, as determined by audiological examinations and questionnaire responses. After the selection of the most relevant clinical features, these were combined with the AEP features, and integrated classification models were developed. Several well-known classifiers, including linear, radial, and polynomial SVM, RF, NB, NN, and LDA, were applied to all the generated datasets in order to evaluate the performance of the selected features in predicting the level of discomfort caused by tinnitus.
The statistical analyses that were conducted using the THI score as a unique criterion for grouping participants revealed statistically significant differences in certain waves’ metrics. However, these differences were accompanied by effect sizes that ranged from small to moderate. The comparisons showing statistically significant differences were used as input features for the classification models. Specifically, the latencies of peaks III (p < 0.001) and V (p < 0.05) as well as the amplitudes of peaks I (p < 0.001) and V (p < 0.05) from the ABR signals, and the latencies of peaks Pa (p < 0.05) and Pb (p < 0.05) and trough Nb (p < 0.05), as well as the amplitudes of troughs Na (p < 0.001) and Nb (p < 0.05), and peaks Pa (p < 0.001) and Pb (p < 0.05) from the AMLR signals, were selected as features.
Comparing people of the same gender with the same hearing threshold level also revealed statistically significant differences in some of the metrics analysed. The number of significant differences decreased when comparing subgroups within each AEP subtype; nevertheless, these comparisons showed moderate to large effect sizes, indicating stronger differences when stricter classification criteria were applied. This suggests that a more homogeneous sample reduces extraneous factors that could dilute the effect size, resulting in a clearer relationship between the variables. It is important to ensure that stricter classification remains relevant and unbiased. The statistical analyses revealed that alterations in the auditory generators of several waves of interest are linked with tinnitus distress. Consequently, a greater or smaller amplitude and a prolonged or shortened latency may indicate a problem with the synchronised activity of these generators [104]. Moreover, various neuro-physiological models of tinnitus perception may explain the variation in AEP morphology and metrics among tinnitus sufferers. Neural synchrony appears to be a possible cause of tinnitus perception. However, owing to differences in participant recruitment with respect to tinnitus characteristics, sex, ageing, pharmacological status, and hearing loss, as well as discrepancies in the stimulus and acquisition parameters of the waveforms [105], divergent and contradictory results may arise when analysing AEPs in tinnitus sufferers. Our findings indicate that AEPs may be incorporated into the overall evaluation of tinnitus patients. However, before particular AEP waves of interest can be recommended as biomarkers of tinnitus distress, additional in-depth research with a greater number of patients is necessary. To reach authoritative conclusions on the use of AEPs as a potential clinical diagnostic tool for tinnitus, more stringent criteria for patient categorisation, including age-range considerations, should be implemented. Consensus among clinicians regarding the stimulus and acquisition parameters for recording these AEPs is equally important.
In the time domain, classification models were built using the latencies and amplitudes of the AEPs as features and showed encouraging performance. Specifically, the models employed the 11 waveform time-domain metrics with p-values below 0.05; among them, the RF classifier achieved the highest AUC value of 0.8532, with a sensitivity of 0.8097 and a specificity of 0.7005.
In the time–frequency domain, the classification models were developed using the coefficients of the two AEP subtypes' wavelet scattering networks as input features. Initially, models including the wavelet scattering coefficients of only one AEP subtype were developed. Comparing their performances, the models using AMLR WST coefficients significantly outperformed those using ABR WST coefficients on all three performance measures across all seven classifiers. The RF classifier on the AMLR WST features performed best, with an AUC value of 0.8913, a sensitivity of 0.8192, and a specificity of 0.8101. We then developed models incorporating the scattering coefficients of both AEP subtypes. Of these, the LDA classifier achieved the best performance, with an AUC value, sensitivity, and specificity of 0.8972, 0.8486, and 0.8161, respectively; the remaining models using the combined scattering coefficients proved similarly reliable, scoring correspondingly high values for both sensitivity and specificity.
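As an illustration of how such scattering coefficients can be obtained, the sketch below processes one zero-padded AMLR epoch with the open-source kymatio Python library. The study's own implementation is not reproduced here: the invariance scale, quality factor, padding length, and the random epoch are all illustrative assumptions.

```python
import numpy as np
from kymatio.numpy import Scattering1D

# Synthetic stand-in for one AMLR epoch: 150 ms sampled at 3 kHz -> 450 samples
rng = np.random.default_rng(0)
amlr_epoch = rng.standard_normal(450)

T = 512                       # zero-padded length (power of two)
x = np.zeros(T)
x[:450] = amlr_epoch

# J sets the invariance scale (2**J samples); Q sets the wavelets per octave
scattering = Scattering1D(J=6, shape=T, Q=8)
Sx = scattering(x)            # (n_paths, time) array of 0th/1st/2nd-order coefficients
features = Sx.mean(axis=-1)   # time-average each scattering path -> one feature per path
print(features.shape)
```

Each ear's epoch then contributes one such feature vector, which is what the classifiers above consume.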
A noteworthy observation regarding the time–frequency-domain models is that the AUC values of the models using only the AMLR WST coefficients were close to those of the models incorporating the WST coefficients of both AEP subtypes (105 features). Notably, the radial SVM classifier achieved a higher AUC in the AMLR-only model than in the corresponding combined-coefficient model. This similarity in AUC values suggests that the AMLR waveforms exert a stronger influence than the ABR signals on categorising the level of tinnitus distress. These findings are indirectly supported by the statistical analyses, which revealed more statistically significant differences in AMLR than in ABR metrics, particularly when categorisation relied solely on the THI score.
Across both the time-domain models and the time–frequency-domain models that used only the ABR WST features, a consistent pattern emerged: specificity values trailed sensitivity values regardless of the classifier employed. This indicates that these models had difficulty correctly identifying subjects with low distress (true negatives) and therefore carried a heightened risk of false positives, i.e., subjects with low distress mistakenly classified as having high distress. In contrast, the models incorporating the AMLR WST features, either alone or together with the ABR WST features, performed well on all three evaluation metrics (AUC, sensitivity, and specificity), detecting and correctly classifying instances from both categories. These results highlight the AMLR WST coefficients as the more informative features of the two AEP subtypes for this classification task.
In an effort to improve the performance of our classification models, we integrated the patients' clinical characteristics, derived from the UNITI project database, into our analysis. The relatively poor performance of the models based on clinical data alone, particularly in terms of specificity, indicated the need for additional, more informative data for tinnitus-distress classification, whereas the better performance of the models using the ABR and AMLR features suggests that AEP signals are highly informative for tinnitus distress.
To create more robust classifiers, we developed models that combined the AMLR WST coefficients with the 15 LASSO-selected clinical features. These integrated models achieved the highest classification performance, indicating that combining multiple measures as features is the most effective choice for this classification task.
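A sketch of this integration step is given below, using an L1-penalised (LASSO-type) logistic regression from scikit-learn to select clinical variables before concatenating them with the AMLR scattering coefficients; the cited glmnet package [98] is the R analogue of this approach. The matrices `X_clinical` and `X_amlr_wst` and the labels `y` are synthetic placeholders, not the study data.

```python
import numpy as np
from sklearn.linear_model import LogisticRegressionCV
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
X_clinical = rng.standard_normal((200, 33))   # placeholder: 33 clinical variables
X_amlr_wst = rng.standard_normal((200, 65))   # placeholder: 65 AMLR WST coefficients
y = rng.integers(0, 2, 200)                   # placeholder distress labels

# L1-penalised logistic regression zeroes out uninformative clinical variables
Xc = StandardScaler().fit_transform(X_clinical)
lasso = LogisticRegressionCV(penalty="l1", solver="liblinear", Cs=20, cv=5).fit(Xc, y)
selected = np.flatnonzero(lasso.coef_.ravel() != 0)    # indices of retained variables

# Concatenate WST coefficients with the selected clinical columns for the integrated models
X_combined = np.hstack([X_amlr_wst, Xc[:, selected]])
print(len(selected), X_combined.shape)
```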
As expected, the use of AEP signals to investigate tinnitus distress has some limitations. Tinnitus is subjective and varies greatly among individuals, which makes it challenging to quantify accurately with AEPs. Furthermore, AEPs provide limited insight into the underlying mechanisms of tinnitus and overlook the psychological and emotional factors contributing to the condition. Signal variability, background noise, and individual differences further constrain AEP analysis. To address these limitations, a comprehensive assessment can be achieved by combining multiple measures, such as self-report questionnaires, behavioural assessments, and physiological measurements. Efforts towards standardisation, noise reduction, and tailored approaches are also essential for improving tinnitus evaluation.
Our classification models faced additional limitations that influenced the outcomes. A major one was the reliance on THI questionnaire scores to categorise sufferers. While self-reported measures are commonly used, they introduce bias that affects judgements and responses [19]. Given the diverse nature and varying severity of tinnitus, it is crucial for clinicians and researchers to ensure that participants understand the instructions so that they can respond accurately. Although qualified professionals supervised the completion of the questionnaires, precise classification based on the scores proved challenging. An additional limitation was the absence of AEP signals from a control group: including signals from both tinnitus and non-tinnitus participants would have improved our understanding of patient differences and allowed for more robust classification models, but this could not be explored in the current study.
In conclusion, in this research we investigated the potential of ABR and AMLR signals for classifying ears into low- and high-distress groups. This study demonstrated that the scattering coefficients of AMLR signals can be efficiently employed for classification tasks, as they yielded excellent AUC values when used with the appropriate classifiers. According to our findings, the scattering coefficients derived from the averaged time windows, in conjunction with the RF classifier, yielded the highest performance. Regarding the interpretation of these results, the AMLR generators are structures along the thalamocortical pathway, although their particular sites remain contested [103]. The rostral location of these generators relative to the ABR generators enables the AMLR to provide insight into the operation of an additional, higher part of the auditory system. In accordance with a previous study [106], individuals with tinnitus appear to have more abnormalities in AMLR components than individuals without tinnitus, suggesting alterations in the generation and transmission of neuroelectrical impulses along the auditory pathway. Our results likewise indicate differences in AMLR waveforms between sufferers with severe to catastrophic tinnitus distress and those with mild to moderate distress. These results may suggest that tinnitus is not related solely to the higher brain areas.

5. Conclusions

Overall, several models were developed in this study to classify individuals with tinnitus according to the level of distress they experience. Given the models' performance, and since no such study has been carried out before, we strongly encourage researchers to further investigate the potential of the WST method for classifying AEP signals. One proposal is to build AEP wavelet scattering networks with different parameter values (i.e., the size of the invariance scale, the number of filter banks, and the number of wavelets per octave in each filter bank), which may lead to even higher classification performance; a starting point for such an exploration is sketched below. The classification task could address both the level of tinnitus severity or discomfort and the presence or absence of tinnitus. In our study, the analysis of AMLR waveforms through the generated scattering coefficients achieved high AUC values, suggesting that AMLR signals can provide insightful information about tinnitus distress. Future investigations are therefore necessary to validate the conclusions drawn from this study.
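As a starting point for the parameter exploration proposed above, the brief sketch below sweeps the invariance scale and the number of wavelets per octave of a kymatio scattering network. The grid values are arbitrary assumptions, and the padded epoch is again a synthetic stand-in.

```python
import numpy as np
from itertools import product
from kymatio.numpy import Scattering1D

T = 512
x = np.random.default_rng(0).standard_normal(T)   # stand-in for a padded AEP epoch

# Candidate invariance scales (2**J samples) and wavelets per octave (Q)
for J, Q in product([4, 5, 6, 7], [4, 8, 12]):
    sc = Scattering1D(J=J, shape=T, Q=Q)
    feats = sc(x).mean(axis=-1)
    print(f"J={J}, Q={Q}: {feats.shape[0]} scattering paths")
    # each resulting feature set would then be scored with the classifier comparison above
```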
The study’s added value lies in the development of standardised procedures for profiling patients with subjective tinnitus in a more accurate and meaningful manner. By identifying the most affected and distressed individuals, researchers and clinicians can investigate the crucial features that contribute to the classification of tinnitus sufferers in terms of distress. This research has the potential to enhance the assessment and management of subjective tinnitus, bringing benefits such as uniformity in treatment for adult patients, the utilisation of ongoing research results, the application of sophisticated data analysis tools, and advancements in personalised treatment and systems-based care. Furthermore, these findings have the capacity to contribute to the field’s knowledge, leading to more accurate and effective patient profiling, which could result in better treatment approaches, improved patient outcomes, and a heightened awareness of tinnitus management challenges.

Supplementary Materials

The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/jcm12113843/s1. Figure S1: QQ plot used to test whether the dependent variable "Pb latency" follows a normal distribution; Figure S2: Boxplots of ABR waveform latencies based on tinnitus distress in people with common hearing levels and gender; Figure S3: Boxplots of ABR waveform amplitudes based on tinnitus distress in people with common hearing levels and gender; Figure S4: Boxplots of AMLR waveform latencies based on tinnitus distress in people with common hearing levels and gender; Figure S5: Boxplots of AMLR waveform amplitudes based on tinnitus distress in people with common hearing levels and gender; Table S1: The 33 selected clinical variables; Table S2: Descriptive statistics regarding the influence of tinnitus distress on ABR components; Table S3: Descriptive statistics regarding the influence of tinnitus distress on AMLR components; Table S4: Statistical differences regarding the influence of tinnitus distress on ABR components; Table S5: Statistical differences regarding the influence of tinnitus distress on AMLR components; Table S6: Descriptive statistics regarding the level of tinnitus distress on the components of the ABR waveforms in people with common hearing levels and gender; Table S7: Descriptive statistics regarding the level of tinnitus distress on the components of the AMLR waveforms in people with common hearing levels and gender; Table S8: Statistical analyses regarding the level of tinnitus distress on the components of the ABR waveforms in people with common hearing levels and gender; Table S9: Statistical analyses regarding the level of tinnitus distress on the components of the AMLR waveforms in people with common hearing levels and gender.

Author Contributions

Conceptualization, O.M. and M.S.; methodology, O.M.; software, O.M.; validation, O.M.; formal analysis, O.M.; investigation, O.M.; resources, B.M. and W.S.; data curation, O.M.; writing—original draft preparation, O.M.; writing—review and editing, O.M., M.S., B.M. and W.S.; visualization, O.M. and M.S.; supervision, D.D.K. and G.K.M.; project administration, D.D.K. and G.K.M.; funding acquisition, W.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The data presented in this study are available on reasonable request from the corresponding author.

Acknowledgments

The UNITI project was financed by the European Union’s Horizon 2020 Research and Innovation Programme, Grant Agreement Number 848261.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Cima, R.F.F.; Mazurek, B.; Haider, H.; Kikidis, D.; Lapira, A.; Noreña, A.; Hoare, D.J. A Multidisciplinary European Guideline for Tinnitus: Diagnostics, Assessment, and Treatment. HNO 2019, 67, 10–42.
  2. Kreuzer, P.M.; Vielsmeier, V.; Langguth, B. Chronic Tinnitus: An Interdisciplinary Challenge. Dtsch. Arztebl. Int. 2013, 110, 278.
  3. Gopinath, B.; McMahon, C.M.; Rochtchina, E.; Karpa, M.J.; Mitchell, P. Incidence, Persistence, and Progression of Tinnitus Symptoms in Older Adults: The Blue Mountains Hearing Study. Ear Hear. 2010, 31, 407–412.
  4. Cima, R.F.F.; Kikidis, D.; Mazurek, B.; Haider, H.; Cederroth, C.R.; Norena, A.; Lapira, A.; Bibas, A.; Hoare, D.J. Tinnitus Healthcare: A Survey Revealing Extensive Variation in Opinion and Practices across Europe. BMJ Open 2020, 10, e029346.
  5. Baguley, D.; McFerran, D.; Hall, D. Tinnitus. Lancet 2013, 382, 1600–1607.
  6. Langguth, B.; Elgoyhen, A.B. Current Pharmacological Treatments for Tinnitus. Expert Opin. Pharmacother. 2012, 13, 2495–2509.
  7. Person, O.C.; Veiga, F.; Junior, A.; Altoé, J.; Portes, L.M.; Lopes, P.R.; Dos, M.E.; Puga, S.; Clayton, O.; Brasileiros De Ciências Da, A.; et al. O Que Revisões Sistemáticas Cochrane Dizem Sobre Terapêutica Para Zumbido? ABCS Heal. Sci. 2022, 47, e022301.
  8. Gallus, S.; Lugo, A.; Garavello, W.; Bosetti, C.; Santoro, E.; Colombo, P.; Perin, P.; La Vecchia, C.; Langguth, B. Prevalence and Determinants of Tinnitus in the Italian Adult Population. Neuroepidemiology 2015, 45, 12–19.
  9. Nondahl, D.M.; Cruickshanks, K.J.; Huang, G.H.; Klein, B.E.K.; Klein, R.; Tweed, T.S.; Zhan, W. Generational Differences in the Reporting of Tinnitus. Ear Hear. 2012, 33, 640.
  10. Hasson, D.; Theorell, T.; Westerlund, H.; Canlon, B. Prevalence and Characteristics of Hearing Problems in a Working and Non-Working Swedish Population. J. Epidemiol. Community Health 2010, 64, 453–460.
  11. McFerran, D.; Hoare, D.J.; Carr, S.; Ray, J.; Stockdale, D. Tinnitus Services in the United Kingdom: A Survey of Patient Experiences. BMC Health Serv. Res. 2018, 18, 110.
  12. Hoare, D.J.; Gander, P.E.; Collins, L.; Smith, S.; Hall, D.A. Management of Tinnitus in English NHS Audiology Departments: An Evaluation of Current Practice. J. Eval. Clin. Pract. 2012, 18, 326–334.
  13. Hall, D.A.; Láinez, M.J.A.; Newman, C.W.; Sanchez, T.; Egler, M.; Tennigkeit, F.; Koch, M.; Langguth, B. Treatment Options for Subjective Tinnitus: Self Reports from a Sample of General Practitioners and ENT Physicians within Europe and the USA. BMC Health Serv. Res. 2011, 11, 302.
  14. El-Shunnar, S.K.; Hoare, D.J.; Smith, S.; Gander, P.E.; Kang, S.; Fackrell, K.; Hall, D.A. Primary Care for Tinnitus: Practice and Opinion among GPs in England. J. Eval. Clin. Pract. 2011, 17, 684–692.
  15. Cima, R.F.F.; Haider, H.; Mazurek, B.; Cederroth, C.R.; Lapira, A.; Kikidis, D.; Noreña, A. Establishment of a Standard for Tinnitus: Patient Assessment, Characterization, and Treatment Options; TINNET COST Action BM1306, 2016. Available online: https://scholar.google.com/scholar_lookup?title=Establishment of a standard for Tinnitus%3B patient assessment%2C characterization%2C and treatment options&publication_year=2016&author=Cima%2CRFF&author=Haider%2CH&author=Mazurek%2CB&author=Cederroth%2CCR&author=Lapira%2CA&author=Kikidis%2CD&author=Noreña%2CA (accessed on 15 October 2022).
  16. Elgoyhen, A.B.; Langguth, B.; De Ridder, D.; Vanneste, S. Tinnitus: Perspectives from Human Neuroimaging. Nat. Rev. Neurosci. 2015, 16, 632–642.
  17. Sarafidis, M.; Manta, O.; Kouris, I.; Schlee, W.; Kikidis, D.; Vellidou, E.; Koutsouris, D. Why a Clinical Decision Support System Is Needed for Tinnitus. In Proceedings of the 2021 43rd Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC), Online, 1–5 November 2021; pp. 2075–2078.
  18. Schlee, W.; Schoisswohl, S.; Staudinger, S.; Schiller, A.; Lehner, A.; Langguth, B.; Schecklmann, M.; Simoes, J.; Neff, P.; Marcrum, S.C.; et al. Towards a Unification of Treatments and Interventions for Tinnitus Patients: The EU Research and Innovation Action UNITI. In Progress in Brain Research; Elsevier B.V.: Amsterdam, The Netherlands, 2021; Volume 260, pp. 441–451. ISBN 9780128215869.
  19. Husain, F.T.; Gander, P.E.; Jansen, J.N.; Shen, S. Expectations for Tinnitus Treatment and Outcomes: A Survey Study of Audiologists and Patients. J. Am. Acad. Audiol. 2018, 29, 313–336.
  20. Cima, R.F.F. Stress-Related Tinnitus Treatment Protocols. In Tinnitus and Stress: An Interdisciplinary Companion for Healthcare Professionals; Springer: Berlin/Heidelberg, Germany, 2017; pp. 139–172.
  21. Humes, L.E.; Rogers, S.E.; Quigley, T.M.; Main, A.K.; Kinney, D.L.; Herring, C. The Effects of Service-Delivery Model and Purchase Price on Hearing-Aid Outcomes in Older Adults: A Randomized Double-Blind Placebo-Controlled Clinical Trial. Am. J. Audiol. 2017, 26, 53–79.
  22. Manta, O.; Sarafidis, M.; Schlee, W.; Consoulas, C.; Kikidis, D.; Koutsouris, D. Electrophysiological Differences in Distinct Hearing Threshold Level Individuals with and without Tinnitus Distress. In Proceedings of the 2022 44th Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC), Glasgow, UK, 11–15 July 2022; Volume 2022, pp. 1630–1633.
  23. Searchfield, G.D.; Zhang, J. The Behavioral Neuroscience of Tinnitus; Springer: Berlin/Heidelberg, Germany, 2021; Volume 51.
  24. Zhang, L.; Wu, C.; Martel, D.T.; West, M.; Sutton, M.A.; Shore, S.E. Noise Exposure Alters Glutamatergic and GABAergic Synaptic Connectivity in the Hippocampus and Its Relevance to Tinnitus. Neural Plast. 2021, 2021, 8833087.
  25. Picton, T.W.; John, M.S.; Purcell, D.W.; Plourde, G. Human Auditory Steady-State Responses: The Effects of Recording Technique and State of Arousal. Anesth. Analg. 2003, 97, 1396–1402.
  26. Paulraj, M.P.; Subramaniam, K.; Bin Yaccob, S.; Bin Adom, A.H.; Hema, C.R. Auditory Evoked Potential Response and Hearing Loss: A Review. Open Biomed. Eng. J. 2015, 9, 17.
  27. Polonenko, M.J.; Maddox, R.K. The Parallel Auditory Brainstem Response. Trends Hear. 2019, 23, 17.
  28. Hall, J.W. (Ed.) Handbook of Auditory Evoked Responses; Pearson Education, Inc.: New York, NY, USA, 2015; ISBN 0205135668.
  29. Sörnmo, L.; Laguna, P. Evoked Potentials. In Bioelectrical Signal Processing in Cardiac and Neurological Applications; Academic Press: Cambridge, MA, USA, 2005; pp. 181–336.
  30. Winkler, I.; Denham, S.; Escera, C. Auditory Event-Related Potentials. In Encyclopedia of Computational Neuroscience; Springer: New York, NY, USA, 2013; pp. 1–29.
  31. Young, A.; Cornejo, J.; Spinner, A. Auditory Brainstem Response; StatPearls Publishing: Treasure Island, FL, USA, 2022.
  32. Milloy, V.; Fournier, P.; Benoit, D.; Noreña, A.; Koravand, A. Auditory Brainstem Responses in Tinnitus: A Review of Who, How, and What? Front. Aging Neurosci. 2017, 9, 237.
  33. Melcher, J.R.; Kiang, N.Y.S. Generators of the Brainstem Auditory Evoked Potential in Cat. III: Identified Cell Populations. Hear. Res. 1996, 93, 52–71.
  34. Chalak, S.; Kale, A.; Deshpande, V.K.; Biswas, D.A. Establishment of Normative Data for Monaural Recordings of Auditory Brainstem Response and Its Application in Screening Patients with Hearing Loss: A Cohort Study. J. Clin. Diagn. Res. 2013, 7, 2677–2679.
  35. Schoisswohl, S.; Langguth, B.; Schecklmann, M.; Bernal-Robledano, A.; Boecking, B.; Cederroth, C.R.; Chalanouli, D.; Cima, R.; Denys, S.; Dettling-Papargyris, J.; et al. Unification of Treatments and Interventions for Tinnitus Patients (UNITI): A Study Protocol for a Multi-Center Randomized Clinical Trial. Trials 2021, 22, 875.
  36. Watson, D.R. The Effects of Cochlear Hearing Loss, Age and Sex on the Auditory Brainstem Response. Int. J. Audiol. 2007, 35, 246–258.
  37. Konadath, S.; Manjula, P. Auditory Brainstem Response and Late Latency Response in Individuals with Tinnitus Having Normal Hearing. Intractable Rare Dis. Res. 2016, 5, 262–268.
  38. Eggermont, J.J. Auditory Brainstem Response. Handb. Clin. Neurol. 2019, 160, 451–464.
  39. McFadden, D.; Champlin, C.A.; Pho, M.H.; Pasanen, E.G.; Malone, M.M.; Leshikar, E.M. Auditory Evoked Potentials: Differences by Sex, Race, and Menstrual Cycle and Correlations with Common Psychoacoustical Tasks. PLoS ONE 2021, 16, e0251363.
  40. Nahak, S.; Pathak, A.; Saha, G. Fragment-Level Classification of ECG Arrhythmia Using Wavelet Scattering Transform. Expert Syst. Appl. 2023, 224, 120019.
  41. Schlee, W.; Langguth, B.; Pryss, R.; Allgaier, J.; Mulansky, L.; Vogel, C.; Spiliopoulou, M.; Schleicher, M.; Unnikrishnan, V.; Puga, C.; et al. Using Big Data to Develop a Clinical Decision Support System for Tinnitus Treatment. Curr. Top. Behav. Neurosci. 2021, 51, 175–189.
  42. Nasreddine, Z.S.; Phillips, N.A.; Bédirian, V.; Charbonneau, S.; Whitehead, V.; Collin, I.; Cummings, J.L.; Chertkow, H. The Montreal Cognitive Assessment, MoCA: A Brief Screening Tool for Mild Cognitive Impairment. J. Am. Geriatr. Soc. 2005, 53, 695–699.
  43. Newman, C.W.; Jacobson, G.P.; Spitzer, J.B. Development of the Tinnitus Handicap Inventory. Arch. Otolaryngol. Head Neck Surg. 1996, 122, 143–148.
  44. Interacoustics Eclipse EP25 Manuals. Available online: https://www.manualslib.com/products/Interacoustics-Eclipse-Ep25-11647463.html (accessed on 18 October 2022).
  45. Ballas, A.; Katrakazas, P. Ωto_abR: A Web Application for the Visualization and Analysis of Click-Evoked Auditory Brainstem Responses. Digital 2021, 1, 188–197.
  46. Lang, D.T. XML: Tools for Parsing and Generating XML within R and S-Plus; R package version 3.99-0.11, 2022. Available online: https://cran.r-project.org/web/packages/XML/index.html (accessed on 25 May 2023).
  47. xml2: Parse XML; R package version 1.3.3, 2021. Available online: https://cran.r-project.org/web/packages/xml2/index.html (accessed on 25 May 2023).
  48. Wickham, H. Ggplot2; Springer: New York, NY, USA, 2016.
  49. R-Forge: signal: Project Home. Available online: https://r-forge.r-project.org/projects/signal/ (accessed on 18 October 2022).
  50. Sueur, J.; Aubin, T.; Simonis, C. Seewave, a Free Modular Tool for Sound Analysis and Synthesis. Bioacoustics 2012, 18, 213–226.
  51. tuneR: Analysis of Music and Speech; R package version 1.4.0, 2022. Available online: https://rdrr.io/cran/tuneR/ (accessed on 25 May 2023).
  52. Van Boxtel, G. Gsignal: Signal Processing. 2021. Available online: https://cran.r-project.org/web/packages/gsignal/gsignal.pdf (accessed on 25 May 2023).
  53. John, D.; Tang, Q.; Albinali, F.; Intille, S. An Open-Source Monitor-Independent Movement Summary for Accelerometer Data Processing. J. Meas. Phys. Behav. 2019, 2, 268–281.
  54. Base Package—RDocumentation. Available online: https://rdocumentation.org/packages/base/versions/3.6.2 (accessed on 18 October 2022).
  55. Fan, S.; Li, S. Objective Detection of Tinnitus Based on Electrophysiology. Brain Sci. 2022, 12, 12.
  56. Manta, O.; Sarafidis, M.; Vasileiou, N.; Schlee, W.; Consoulas, C.; Kikidis, D.; Vassou, E.; Matsopoulos, G.K.; Koutsouris, D.D. Development and Evaluation of Automated Tools for Auditory-Brainstem and Middle-Auditory Evoked Potentials Waves Detection and Annotation. Brain Sci. 2022, 12, 1675.
  57. Fox, J. Applied Regression Analysis and Generalized Linear Models; SAGE Publications: London, UK, 2016; ISBN 1452205663.
  58. Fox, J.; Weisberg, S. An R Companion to Applied Regression; SAGE Publications: London, UK, 2019; ISBN 1544336470.
  59. RDocumentation. t.test Function. Available online: https://www.rdocumentation.org/packages/stats/versions/3.6.2/topics/t.test (accessed on 24 October 2022).
  60. Aoki, S. Effect Sizes of the Differences between Means without Assuming Variance Equality and between a Mean and a Constant. Heliyon 2020, 6, e03306.
  61. Torchiano, M. effsize: Efficient Effect Size Computation; R package, 2022. Available online: https://cran.r-project.org/web/packages/effsize/effsize.pdf (accessed on 25 May 2023).
  62. Soro, B.; Lee, C. A Wavelet Scattering Feature Extraction Approach for Deep Neural Network Based Indoor Fingerprinting Localization. Sensors 2019, 19, 1790.
  63. Mallat, S. Group Invariant Scattering. Commun. Pure Appl. Math. 2012, 65, 1331–1398.
  64. Liu, Z.; Yao, G.; Zhang, Q.; Zhang, J.; Zeng, X. Wavelet Scattering Transform for ECG Beat Classification. Comput. Math. Methods Med. 2020, 2020, 3215681.
  65. Bruna, J.; Mallat, S. Invariant Scattering Convolution Networks. IEEE Trans. Pattern Anal. Mach. Intell. 2013, 35, 1872–1886.
  66. Wang, J.; Zhang, X.; Gao, Q.; Ma, X.; Feng, X.; Wang, H. Device-Free Simultaneous Wireless Localization & Activity Recognition with Wavelet Feature. IEEE Trans. Veh. Technol. 2017, 66, 1659–1669.
  67. Oyallon, E.; Belilovsky, E.; Zagoruyko, S. Scaling the Scattering Transform: Deep Hybrid Networks. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), Venice, Italy, 22–29 October 2017; pp. 5618–5627.
  68. Mallat, S. Understanding Deep Convolutional Networks. Philos. Trans. A Math. Phys. Eng. Sci. 2016, 374, 20150203.
  69. MATLAB. Understanding Wavelets, Part 5: Machine Learning and Deep Learning with Wavelet Scattering. Available online: https://www.mathworks.com/videos/understanding-wavelets-part-5-machine-learning-and-deep-learning-with-wavelet-scattering-1577170399650.html (accessed on 13 October 2022).
  70. Buriro, A.B.; Ahmed, B.; Baloch, G.; Ahmed, J.; Shoorangiz, R.; Weddell, S.J.; Jones, R.D. Classification of Alcoholic EEG Signals Using Wavelet Scattering Transform-Based Features. Comput. Biol. Med. 2021, 139, 104969.
  71. Bruna, J.; Mallat, S. Classification with Scattering Operators. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2011), Colorado Springs, CO, USA, 20–25 June 2011; pp. 1561–1566.
  72. Ahmad, M.Z.; Kamboh, A.M.; Saleem, S.; Khan, A.A. Mallat's Scattering Transform Based Anomaly Sensing for Detection of Seizures in Scalp EEG. IEEE Access 2017, 5, 16919–16929.
  73. Wavelet Time Scattering for ECG Signal Classification—MATLAB & Simulink Example. Available online: https://www.mathworks.com/help/wavelet/ug/ecg-signal-classification-using-wavelet-time-scattering.html (accessed on 1 November 2022).
  74. Wavelet Scattering—MATLAB & Simulink. Available online: https://www.mathworks.com/help/wavelet/ug/wavelet-scattering.html (accessed on 1 November 2022).
  75. Susu, A.A.; Agboola, H.A.; Solebo, C.; Lesi, F.E.A.; Aribike, D.S. Wavelet Time Scattering Based Classification of Interictal and Preictal EEG Signals. J. Brain Res. 2020, 3, 1–9.
  76. Wavelet Time Scattering Classification of Phonocardiogram Data—MATLAB & Simulink Example. Available online: https://www.mathworks.com/help/wavelet/ug/wavelet-time-scattering-classification-of-phonocardiogram-data.html (accessed on 1 November 2022).
  77. Mei, N.; Wang, H.; Zhang, Y.; Liu, F.; Jiang, X.; Wei, S. Classification of Heart Sounds Based on Quality Assessment and Wavelet Scattering Transform. Comput. Biol. Med. 2021, 137, 104814.
  78. British Society of Audiology—BSA. Available online: https://www.thebsa.org.uk/ (accessed on 4 November 2022).
  79. Meikle, M.B.; Henry, J.A.; Griest, S.E.; Stewart, B.J.; Abrams, H.B.; McArdle, R.; Myers, P.J.; Newman, C.W.; Sandridge, S.; Turk, D.C.; et al. The Tinnitus Functional Index: Development of a New Clinical Measure for Chronic, Intrusive Tinnitus. Ear Hear. 2012, 33, 153–176.
  80. Coles, R.R.A.; Lutman, M.E.; Axelsson, A.; Hazell, J.W.P. Tinnitus Severity Gradings: Cross-Sectional Studies. 1991; pp. 453–455. Available online: https://scholar.google.com/scholar_lookup?title=Tinnitus severity gradings%3A cross sectional studies&pages=453-455&publication_year=1991&author=Coles%2CRRA&author=Lutman%2CME&author=Axelsson%2CA&author=Hazell%2CJWP (accessed on 8 September 2021).
  81. Hallam, R.S. Manual of the Tinnitus Questionnaire; Psychological Corporation: London, UK, 1996. Available online: https://scholar.google.com/scholar?hl=en&as_sdt=0%2C5&q=Hallam+RS+%281996%29+Manual+of+the+tinnitus+questionnaire+%28TQ%29.+Psychological+Corporation%2C+London&btnG= (accessed on 8 September 2021).
  82. Psychological Aspects of Tinnitus. Available online: https://www.researchgate.net/publication/306164435_Psychological_aspects_of_tinnitus (accessed on 14 October 2021).
  83. Fackrell, K.; Fearnley, C.; Hoare, D.J.; Sereda, M. Hyperacusis Questionnaire as a Tool for Measuring Hypersensitivity to Sound in a Tinnitus Research Population. Biomed Res. Int. 2015, 2015, 290425.
  84. Anari, M.; Axelsson, A.; Eliasson, A.; Magnusson, L. Hypersensitivity to Sound: Questionnaire Data, Audiometry and Classification. Scand. Audiol. 1999, 28, 219–230.
  85. Kroenke, K.; Spitzer, R.L.; Williams, J.B.W. The PHQ-9: Validity of a Brief Depression Severity Measure. J. Gen. Intern. Med. 2001, 16, 606.
  86. The World Health Organization. WHOQOL: Measuring Quality of Life. Available online: https://www.who.int/tools/whoqol (accessed on 4 November 2022).
  87. Genitsaridi, E.; Partyka, M.; Gallus, S.; Lopez-Escamez, J.A.; Schecklmann, M.; Mielczarek, M.; Trpchevska, N.; Santacruz, J.L.; Schoisswohl, S.; Riha, C.; et al. Standardised Profiling for Tinnitus Research: The European School for Interdisciplinary Tinnitus Research Screening Questionnaire (ESIT-SQ). Hear. Res. 2019, 377, 353–359.
  88. Malpass, A.; Wiles, N.; Dowrick, C.; Robinson, J.; Gilbody, S.; Duffy, L.; Lewis, G. Usefulness of PHQ-9 in Primary Care to Determine Meaningful Symptoms of Low Mood: A Qualitative Study. Br. J. Gen. Pract. 2016, 66, e78–e84.
  89. Ford, J.; Thomas, F.; Byng, R.; McCabe, R. Use of the Patient Health Questionnaire (PHQ-9) in Practice: Interactions between Patients and Physicians. Qual. Health Res. 2020, 30, 2146–2159.
  90. Bläsing, L.; Goebel, G.; Flötzinger, U.; Berthold, A.; Kröner-Herwig, B. Hypersensitivity to Sound in Tinnitus Patients: An Analysis of a Construct Based on Questionnaire and Audiological Data. Int. J. Audiol. 2010, 49, 518–526.
  91. Tharwat, A.; Gaber, T.; Ibrahim, A.; Hassanien, A.E. Linear Discriminant Analysis: A Detailed Tutorial. AI Commun. 2017, 30, 169–190.
  92. Caie, P.D.; Dimitriou, N.; Arandjelović, O. Precision Medicine in Digital Pathology via Image Analysis and Machine Learning. In Artificial Intelligence and Deep Learning in Pathology; Elsevier: Amsterdam, The Netherlands, 2021; pp. 149–173.
  93. Naive Bayes—Scikit-Learn 1.1.3 Documentation. Available online: https://scikit-learn.org/stable/modules/naive_bayes.html (accessed on 7 December 2022).
  94. Chen, L. Support Vector Machine—Simply Explained. Towards Data Science. Available online: https://towardsdatascience.com/support-vector-machine-simply-explained-fee28eba5496 (accessed on 7 December 2022).
  95. Knocklein, O. Classification Using Neural Networks. Towards Data Science. Available online: https://towardsdatascience.com/classification-using-neural-networks-b8e98f3a904f (accessed on 7 December 2022).
  96. Powers, D.M.W. Evaluation: From Precision, Recall and F-Measure to ROC, Informedness, Markedness and Correlation. Int. J. Mach. Learn. Technol. 2020, 2, 37–63.
  97. Huang, J.; Ling, C.X. Using AUC and Accuracy in Evaluating Learning Algorithms. IEEE Trans. Knowl. Data Eng. 2005, 17, 299–310.
  98. Friedman, J.; Hastie, T.; Tibshirani, R. Regularization Paths for Generalized Linear Models via Coordinate Descent. J. Stat. Softw. 2010, 33, 1–22.
  99. Biesinger, E.; Heiden, C.; Greimel, V.; Lendle, T.; Hoing, R.; Albegger, K. Strategies in Ambulatory Treatment of Tinnitus. HNO 1998, 46, 157–169.
  100. Theodoroff, S.M. Tinnitus Questionnaires for Research and Clinical Use. Curr. Top. Behav. Neurosci. 2021, 51, 403–418.
  101. Strategien in der ambulanten Behandlung des Tinnitus. Available online: https://www.infona.pl/resource/bwmeta1.element.springer-4546a6c9-8888-3eef-af2d-3c006bff692d (accessed on 8 September 2021).
  102. Tinnitus—Ear, Nose, and Throat Disorders—MSD Manual Professional Edition. Available online: https://www.msdmanuals.com/professional/ear,-nose,-and-throat-disorders/approach-to-the-patient-with-ear-problems/tinnitus?query=tinnitus (accessed on 17 November 2022).
  103. Musiek, F.; Nagle, S. The Middle Latency Response: A Review of Findings in Various Central Nervous System Lesions. J. Am. Acad. Audiol. 2018, 29, 855–867.
  104. Cardon, E.; Joossen, I.; Vermeersch, H.; Jacquemin, L.; Mertens, G.; Vanderveken, O.M.; Topsakal, V.; Van De Heyning, P.; Van Rompaey, V.; Gilles, A. Systematic Review and Meta-Analysis of Late Auditory Evoked Potentials as a Candidate Biomarker in the Assessment of Tinnitus. PLoS ONE 2020, 15, e0243785.
  105. De Azevedo, A.A.; Figueiredo, R.R.; Penido, N.O. Tinnitus and Event Related Potentials: A Systematic Review. Braz. J. Otorhinolaryngol. 2020, 86, 119–126.
  106. Dos Santos Filha, V.A.V.; Samelli, A.G.; Matas, C.G. Middle Latency Auditory Evoked Potential (MLAEP) in Workers with and without Tinnitus Who Are Exposed to Occupational Noise. Med. Sci. Monit. 2015, 21, 2701–2706.
Figure 1. Typical annotated ABR signal, presenting the five waves of interest, from I to V (red waveform), and AMLR signal, presenting the four waves of interest: Na, Pa, Nb, and Pb (blue waveform).
Figure 2. Overall Study Workflow.
Figure 3. Schematic illustrating the feature extraction from AEP signals using second-order WST. S0x, S1x, and S2x denote the 0th (time-averaged or low-pass filtered), 1st, and 2nd order scattering coefficients of WST, respectively. * and |.| represent the convolution and modulus operators, respectively.
Figure 4. Boxplots of ABR waveform latencies based on tinnitus distress (in purple: severe/high tinnitus distress; in green: mild/moderate tinnitus distress; asterisks indicate significance: ** p-value ≤ 0.01; *** p-value ≤ 0.001).
Figure 5. Boxplots of ABR waveform amplitudes based on tinnitus distress (in purple: severe/high tinnitus distress; in green: mild/moderate tinnitus distress; asterisks indicate significance: * p-value ≤ 0.05; *** p-value ≤ 0.001).
Figure 6. Boxplots of AMLR waveform latencies based on tinnitus distress (in purple: severe/high tinnitus distress; in green: mild/moderate tinnitus distress; asterisks indicate significance: * p-value ≤ 0.05; ** p-value ≤ 0.01).
Figure 7. Boxplots of AMLR waveform amplitudes based on tinnitus distress (in purple: severe/high tinnitus distress; in green: mild/moderate tinnitus distress; asterisks indicate significance: ** p-value ≤ 0.01; *** p-value ≤ 0.001).
Figure 8. (a) Plot of the wavelet filters used in the first and second filter banks; (b) plot of the Littlewood–Paley sums of the filter banks.
Figure 9. The ROC curve of the classifier using the 11 selected AEP metrics in time domain as features.
Figure 10. The ROC curve of the classifier with the highest AUC value for the models using the ABR WST coefficients as features.
Figure 11. The ROC curve of the classifier with the highest AUC value for the models using the AMLR WST coefficients as features.
Figure 12. The ROC curve of the classifier with the highest AUC value for the models using both AEP subtypes' WST coefficients as features.
Figure 13. The ROC curve of the classifier with the highest AUC value for the models using all 33 clinical characteristics as features.
Figure 14. The ROC curve of the classifier with the highest AUC value for the models using the 15 LASSO-selected clinical characteristics as features.
Figure 15. The ROC curve of the classifier with the highest AUC value for the models using the AMLR WST coefficients in combination with the 15 LASSO-selected clinical characteristics as features.
Table 1. Inclusion and exclusion criteria of the UNITI project RCT.
Inclusion Criteria
  • Tinnitus as primary complaint;
  • Chronic tinnitus (for at least 6 months based on history);
  • Age 18–80 years;
  • Ability to understand and consent to the research requirements, and to participate (hearing ability, intellectual capacity, no plans for sabbaticals or long-term holidays, no plans for pregnancy);
  • A score of >22 on the Montreal Cognitive Assessment (MoCA), i.e., adults without mild cognitive impairment [42];
  • Ability and willingness to use the UNITI mobile applications on their smartphones;
  • A score of ≥18 in the Tinnitus Handicap Inventory (THI) [43];
  • Willing to use a hearing aid (if there was indication);
  • If a drug therapy with psychoactive substances (e.g., antidepressants, anticonvulsants) existed at the beginning of the therapeutic intervention, it must have been stable for at least 30 days. The therapy should remain constant during the whole study, but a necessary change was not an exclusion criterion. Any change in medication was documented in the case report form (CRF).
Exclusion Criteria
  • Objective tinnitus/heartbeat-synchronous tinnitus as primary complaint;
  • Start of any other tinnitus-related treatments, especially hearing aids (HA), structured counselling, sound therapy (with special devices, expecting long term effects) or cognitive behavioural therapy in the last 3 months before the start of the study *;
  • Otosclerosis/acoustic neuroma or other relevant ear disorders with fluctuating hearing;
  • Present acute infections (e.g., acute otitis media, otitis externa, acute sinusitis);
  • Meniere’s disease or similar syndromes (but not vestibular migraine);
  • Serious internal, neurological, or psychiatric conditions;
  • Epilepsy or other CNS disorders (e.g., brain tumour, encephalitis);
  • Clinically relevant drug, medication or alcohol abuse up to 12 weeks before the start of the study;
  • Missing written informed consent;
  • Severe hearing loss—inability to communicate properly in the course of the study; 70 dB hearing level at 4 kHz (deviations were possible if there was a clinical justification for it);
  • One deaf ear.
* If an HA has already been worn for three months before screening, eligible candidates were allowed to participate but were automatically assigned to the no HA indication group.
Table 2. Stimulus and acquisition parameters for ABR and AMLR recordings.

ABR stimulus parameters:
  • Type of transducer: insert phone
  • Sample rate: 30 kHz
  • Type of stimulus: click
  • Polarity: alternate
  • Repetition rate: 22 stimuli per second
  • Intensity: 80 dB nHL
  • Masking: off

ABR acquisition parameters:
  • Analysis time: 15 ms
  • Sweeps: 4000
  • Mode: monaural
  • Electrode montage: vertical (Fpz, Cz, M1/M2)
  • Input-amplifier filter settings: low pass 1500 Hz; high pass 33 Hz, 6 dB per octave
  • Preliminary display settings: low pass 1500 Hz; high pass 150 Hz

AMLR stimulus parameters:
  • Type of transducer: insert phone
  • Sample rate: 3 kHz
  • Type of stimulus: 2 kHz tone burst, manual window
  • Duration of stimulus: 28 sine waves in total; rise/fall: 4; plateau: 20
  • Polarity: rarefaction
  • Repetition rate: 6.1 stimuli per second
  • Intensity: 70 dB nHL
  • Masking: off

AMLR acquisition parameters:
  • Analysis time: 150 ms
  • Sweeps: 500
  • Mode: monaural
  • Electrode montage: vertical (Fpz, Cz, M1/M2)
  • Input-amplifier filter settings: low pass 1500 Hz; high pass 10 Hz, 12 dB per octave
  • Preliminary display settings: low pass 100 Hz; high pass 15 Hz
Table 3. WST dataset variables.

Number of Features | Description | Type of Values
40 | ABR scattering coefficients | Numeric
65 | AMLR scattering coefficients | Numeric
105 | ABR and AMLR scattering coefficients | Numeric
Table 4. Selected AEP metric features in the time domain.

No. | Description | Range of Values
1 | III peak latency | Numeric
2 | V peak latency | Numeric
3 | I peak amplitude | Numeric
4 | V peak amplitude | Numeric
5 | Pa peak latency | Numeric
6 | Nb trough latency | Numeric
7 | Pb peak latency | Numeric
8 | Na trough amplitude | Numeric
9 | Pa peak amplitude | Numeric
10 | Nb trough amplitude | Numeric
11 | Pb peak amplitude | Numeric
Table 5. Performance results of the classification models using the 11 selected AEP metrics in the time domain as features (* marks the classifier with the highest AUC value).

Machine-Learning Classifier | No. of Features | AUC | Sensitivity | Specificity
LDA | 11 | 0.7213 | 0.7879 | 0.5411
Linear SVM | 11 | 0.7131 | 0.7895 | 0.5171
NB | 11 | 0.7482 | 0.7394 | 0.6410
NN | 11 | 0.7300 | 0.7220 | 0.6578
Poly SVM | 11 | 0.7462 | 0.8359 | 0.5516
Radial SVM | 11 | 0.7667 | 0.7762 | 0.6619
RF * | 11 | 0.8532 | 0.8097 | 0.7005
Table 6. Performance results of the classification models using the ABR WST coefficients as features (* marks the classifier with the highest AUC value).

Machine-Learning Classifier | No. of Features | AUC | Sensitivity | Specificity
LDA | 40 | 0.7970 | 0.7743 | 0.6639
Linear SVM | 40 | 0.8023 | 0.7897 | 0.6322
NB | 40 | 0.6522 | 0.4767 | 0.7234
NN | 40 | 0.7986 | 0.7609 | 0.6796
Poly SVM * | 40 | 0.8055 | 0.8302 | 0.6729
Radial SVM | 40 | 0.7331 | 0.7498 | 0.5858
RF | 40 | 0.7923 | 0.7762 | 0.6819
Table 7. Performance results of the classification models using the AMLR WST coefficients as features (* marks the classifier with the highest AUC value).

Machine-Learning Classifier | No. of Features | AUC | Sensitivity | Specificity
LDA | 65 | 0.8715 | 0.8009 | 0.7849
Linear SVM | 65 | 0.8459 | 0.8122 | 0.7374
NB | 65 | 0.7464 | 0.6560 | 0.7450
NN | 65 | 0.8595 | 0.8167 | 0.7722
Poly SVM | 65 | 0.8816 | 0.8335 | 0.7711
Radial SVM | 65 | 0.8836 | 0.8257 | 0.7618
RF * | 65 | 0.8913 | 0.8192 | 0.8101
Table 8. Performance results of the classification models using both AEP subtypes' WST coefficients as features (* marks the classifier with the highest AUC value).

Machine-Learning Classifier | No. of Features | AUC | Sensitivity | Specificity
LDA * | 105 | 0.8972 | 0.8486 | 0.8161
Linear SVM | 105 | 0.8902 | 0.8622 | 0.8040
NB | 105 | 0.7441 | 0.6293 | 0.7488
NN | 105 | 0.8926 | 0.8295 | 0.8158
Poly SVM | 105 | 0.8966 | 0.8526 | 0.7934
Radial SVM | 105 | 0.8703 | 0.8234 | 0.7418
RF | 105 | 0.8903 | 0.8260 | 0.7887
Table 9. Performance results of the classification models using all 33 clinical characteristics as features (* marks the classifier with the highest AUC value).

Machine-Learning Classifier | No. of Features | AUC | Sensitivity | Specificity
LDA | 33 | 0.6981 | 0.7310 | 0.5495
Linear SVM | 33 | 0.6834 | 0.7789 | 0.4508
NB | 33 | 0.6928 | 0.7771 | 0.4466
NN | 33 | 0.5368 | 0.7256 | 0.3154
Poly SVM * | 33 | 0.7306 | 0.7496 | 0.5667
Radial SVM | 33 | 0.7221 | 0.7116 | 0.6150
RF | 33 | 0.7034 | 0.7157 | 0.5612
Table 10. Performance results of the classification models using the 15 LASSO-selected clinical characteristics as features (* marks the classifier with the highest AUC value).

Machine-Learning Classifier | No. of Features | AUC | Sensitivity | Specificity
LDA | 15 | 0.7560 | 0.7460 | 0.6374
Linear SVM | 15 | 0.7649 | 0.7819 | 0.6159
NB | 15 | 0.7548 | 0.7792 | 0.5662
NN | 15 | 0.7390 | 0.5929 | 0.7342
Poly SVM * | 15 | 0.7727 | 0.7732 | 0.6680
Radial SVM | 15 | 0.7493 | 0.7392 | 0.6261
RF | 15 | 0.7186 | 0.7505 | 0.5867
Table 11. Performance results of the classification models using the AMLR WST coefficients in combination with the 15 LASSO-selected clinical characteristics as features (* marks the classifier with the highest AUC value).

Machine-Learning Classifier | No. of Features | AUC | Sensitivity | Specificity
LDA | 80 | 0.8886 | 0.8128 | 0.7690
Linear SVM | 80 | 0.9087 | 0.8591 | 0.8201
NB | 80 | 0.8098 | 0.7053 | 0.7981
NN | 80 | 0.9075 | 0.8367 | 0.8395
Poly SVM | 80 | 0.9250 | 0.8647 | 0.8599
Radial SVM * | 80 | 0.9253 | 0.8484 | 0.8304
RF | 80 | 0.9240 | 0.8598 | 0.8209
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
