Article

Rethinking the Methods and Algorithms for Inner Speech Decoding and Making Them Reproducible

Embedded Intelligent Systems LAB, Machine Learning, Department of Computer Science, Electrical and Space Engineering, Luleå University of Technology, 97187 Luleå, Sweden
*
Author to whom correspondence should be addressed.
NeuroSci 2022, 3(2), 226-244; https://doi.org/10.3390/neurosci3020017
Submission received: 1 March 2022 / Revised: 11 April 2022 / Accepted: 12 April 2022 / Published: 19 April 2022

Abstract

This study focuses on the automatic decoding of inner speech using noninvasive methods, such as Electroencephalography (EEG). While inner speech has been a research topic in philosophy and psychology for half a century, attempts to decode non-voiced spoken words using various brain–computer interfaces have only been made recently. The main shortcomings of existing work are reproducibility and the availability of data and code. In this work, we investigate various methods (Convolutional Neural Network (CNN), Gated Recurrent Unit (GRU), and Long Short-Term Memory (LSTM) networks) for the task of classifying five vowels and six words on a publicly available EEG dataset. The main contributions of this work are (1) a comparison of subject-dependent and subject-independent approaches, (2) an analysis of the effect of different preprocessing steps (Independent Component Analysis (ICA), down-sampling, and filtering), and (3) word classification, where we achieve state-of-the-art performance on a publicly available dataset. Overall, we achieve accuracies of 35.20% and 29.21% when classifying five vowels and six words, respectively, on a publicly available dataset, using our tuned iSpeech-CNN architecture. All of our code and processed data are publicly available to ensure reproducibility. As such, this work contributes to a deeper understanding and reproducibility of experiments in the area of inner speech detection.

1. Introduction

Thought is strongly related to inner speech [1,2]: a voice inside the brain that does not actually speak aloud. Inner speech, although not audible, occurs when reading, writing, and even when idle (i.e., “mind-wandering” [3]). Moreover, inner speech follows the same patterns, e.g., regional accents, as if the person were actually speaking aloud [4]. This work focuses on inner speech decoding.
Inner speech has been a research topic in the philosophy of psychology since the second half of the 20th century [5], with results showing that the parts of the brain responsible for the generation of inner speech are the frontal gyri, including Broca’s area, the supplementary motor area, and the precentral gyrus. The automatic detection of inner speech, in contrast, has only very recently become a popular research topic [6,7]. A core challenge of this research is to go beyond the closed-vocabulary decoding of words and integrate other language domains (e.g., phonology and syntax) to reconstruct the entire speech stream.
In this work, we conducted extensive experiments using deep learning methods to decode five vowels and six words on a publicly available electroencephalography (EEG) dataset [8]. The backbone CNN architecture used in this work is based on the work of Cooney et al. [7].
The main contributions of this work are as follows: (i) providing code for reproducing the reported results, (ii) comparing subject-dependent and subject-independent approaches, (iii) analyzing the effect of different preprocessing steps (ICA, down-sampling, and filtering), and (iv) achieving state-of-the-art performance on the six-word classification task, with a mean accuracy of 29.21% over all subjects on a publicly available dataset [8].

State-of-the-Art Literature

Research studies in inner speech decoding use data from invasive methods (e.g., Electrocorticography (ECoG) [9,10]) and non-invasive methods (e.g., Magnetoencephalography (MEG) [11,12], functional Magnetic Resonance Imaging (fMRI) [13], and Functional Near-Infrared Spectroscopy (fNIRS) [14,15]), with EEG being the most dominant modality used so far [16]. Martin et al. [10] attempted to detect single words from inner speech using ECoG recordings of inner and outer speech. This study included six word pairs and achieved a binary classification accuracy of 58% using a Support Vector Machine (SVM). ECoG is not scalable because it is invasive, but it advances our understanding of the limits of inner speech decoding. Recent methods used a CNN with “MEG-as-an-image” [12] and “EEG-as-raw-data” [7,17] inputs.
The focus of this paper is on inner speech decoding in terms of the classification of words and vowels. Classified words can be useful in many scenarios of human–computer communication, e.g., in smart homes or health-care devices, where the human wants to give simple commands via brain signals in a natural way. For human-to-human communication, the ultimate goal of inner speech decoding (in terms of representation learning) is often to synthesize speech [18,19]. In this related area, Ref. [18] uses a minimally invasive method called stereotactic EEG (sEEG) with one subject and 100 Dutch words, with an open-loop stage for training the decoding models and a closed-loop stage for evaluating imagined and whispered speech in real time. The attempt, although not yet producing intelligible speech, provides a proof of concept for tackling the closed-loop synthesis of imagined speech in real time. Ref. [19] uses MEG data from seven subjects, using, as stimuli, five phrases (1. Do you understand me, 2. That’s perfect, 3. How are you, 4. Good-bye, and 5. I need help) and two words (yes/no). They follow a subject-dependent approach, where they train and tune a different model per subject. Using a bidirectional long short-term memory recurrent neural network, they achieve a correlation score for the reconstructed speech envelope of 0.41 for phrases and 0.77 for words.
Ref. [15] reported an average classification accuracy of 70.45 ± 19.19% for a binary word classification task using Regularized Linear Discriminant Analysis (RLDA) on fNIRS data. EEGNet [20] is a CNN-based deep learning architecture for EEG signal analysis that includes a series of 2D convolutional layers, average pooling layers, and batch normalization layers with activations. Finally, a fully connected layer at the end of the network classifies the representations learned by the preceding layers. EEGNet serves as the backbone network in our model; however, the proposed model extends EEGNet in a similar manner to [7].
There are two main approaches when it comes to brain data analysis: subject dependent and subject independent (see Table 1). In the subject-dependent approach, the analysis is carried out for each subject individually and performance is reported per subject. Representative studies using the subject-dependent approach are detailed in the following. Ref. [8] reported a mean recognition rate of 22.32% in classifying five vowels and 18.58% in classifying six words using a Random Forest (RF) algorithm with a subject-dependent approach. Using the data from six subjects, Ref. [21] reported an average accuracy of 50.1% ± 3.5% for a three-word classification problem and 66.2% ± 4.8% for a binary classification problem (long vs. short words), following a subject-dependent approach using a Multi-Class Relevance Vector Machine (MRVM). In [12], MEG data from inner and outer speech were used; an average accuracy of 93% for inner speech and 96% for outer speech decoding of five phrases was reported with a subject-dependent approach using a CNN. Recently, Ref. [22] reported an average accuracy of 29.7% for a four-word classification task on a publicly available dataset of inner speech [23]. In the subject-independent approach, all subjects are taken into account and the performance is reported using the data of all subjects; therefore, the generated decoding model can generalize to new subjects’ data. The following studies use a subject-independent approach. In [6], the authors reported an overall accuracy of 90% on the binary classification of vowels versus consonants using Deep-Belief Networks (DBN) and the combination of all modalities (inner and outer speech), in a subject-independent approach. In [7], the authors used a CNN with transfer learning to analyze inner speech on the EEG dataset of [8]. In these experiments, the CNN was trained on the raw EEG data of all subjects but one. A subset of the remaining subject’s data was used to fine-tune the CNN, and the rest of the data were used to test the CNN model. The authors reported an overall accuracy of 35.68% (five-fold cross-validation) for the five-vowel classification task.

2. Materials and Methods

2.1. Dataset and Experimental Protocol

The current work uses a publicly available EEG dataset as described in [8]. This dataset includes recordings from 15 subjects using their inner and outer speech to pronounce 5 vowels (/a/, /e/, /i/, /o/, /u/) and 6 words (arriba/up, abajo/down, derecha/right, izquierda/left, adelante/forward, and atrás/backwards). A total of 3316 and 4025 imagined speech EEG recordings for vowels and words, respectively, are available in the dataset. An EEG setup with 6 electrodes was used in these recordings.
Figure 1 shows the experimental design followed in [8]. The experimental protocol consisted of a ready interval presented for 2 s, followed by the stimulus (vowel or word) presented for 2 s. The subjects were asked to use their inner or outer speech during the imagination interval to pronounce the stimulus. Finally, a rest interval of 4 s was presented, indicating that the subjects could move or blink their eyes before proceeding to the next stimulus. It is important to note that, for the purpose of our study, only the inner speech part of the experiment was used.

2.2. Methods

The proposed framework uses a deep CNN to extract representations from the input EEG signals. Before applying the proposed CNN, the signals are preprocessed and then the CNN network is trained on the preprocessed signals.
Figure 2 depicts the flow of the proposed work. Separate networks are trained for vowels and words following the architecture depicted in Figure 2. The proposed network is inspired by Cooney et al. [7], who performed filtering, downsampling, and artifact removal before applying the CNN; however, we noticed that downsampling degrades the recognition performance (see Section 4). As a result, we did not downsample the signals in our experiments. The downsampling block is marked with a cross in Figure 2 to indicate that this step is not included in our proposed system, in contrast to [7]. The current work reports results on three different experimental approaches using preprocessed data and raw data. The three approaches are discussed in detail in Section 3.1. More information about the preprocessing techniques can be found in Section 2.3.

2.3. Preprocessing

In the current work, we apply the following preprocessing steps (a minimal tooling sketch follows this list):
  • Filtering: A frequency band between 2 Hz and 40 Hz is used for filtering [8].
  • Down-sampling: The filtered data are down-sampled to 128 Hz. The original sampling frequency of the data is 1024 Hz.
  • Artifact removal: Independent Component Analysis (ICA) is a blind-source separation technique whose advantages are most obvious when a multi-channel signal is recorded. ICA transforms a multivariate signal into independent components, which makes it possible to identify components that contain artifacts such as eye blinks or eye movements. These components are then filtered out before the data are translated back from the source space to the sensor space. ICA effectively removes noise from the EEG data and is, therefore, an aid to classification. Given the small number of channels, we keep all channels intact and instead use ICA [25] for artifact removal (https://github.com/pierreablin/picard/blob/master/matlab_octave/picard.m, accessed on 27 February 2022).
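For orientation, the following is a minimal sketch of such a pipeline using MNE-Python. This tooling choice is an assumption on our part: the paper links a MATLAB/Octave Picard implementation, and the authors’ actual scripts live in their repository. The file name, channel count, and excluded component index are hypothetical placeholders.

```python
# Minimal preprocessing sketch (assumed MNE-Python tooling; requires the python-picard package).
import mne
from mne.preprocessing import ICA

raw = mne.io.read_raw_fif("subject01_inner_speech_raw.fif", preload=True)  # hypothetical file

# Band-pass filter between 2 Hz and 40 Hz (Section 2.3).
raw.filter(l_freq=2.0, h_freq=40.0)

# Optional down-sampling from 1024 Hz to 128 Hz (omitted in iSpeech-CNN, see Section 4).
# raw.resample(128)

# Artifact removal with Picard ICA: fit on the few EEG channels, mark ocular components,
# and project the data back to sensor space without them.
ica = ICA(n_components=6, method="picard", random_state=0)
ica.fit(raw)
ica.exclude = [0]              # artifact component indices, chosen after visual inspection
clean = ica.apply(raw.copy())  # cleaned sensor-space signal used for classification
```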
Figure A3 (see Appendix C) depicts the preprocessed signal after applying ICA. This figure shows the vowel /a/ for two subjects. From this figure, it can be noted that the signals of the two subjects overlap considerably, suggesting that a single subject’s model is not discriminative enough. The behavior of all electrodes for all vowels for Subject-02 can be seen in Figure A4 (see Appendix C). From this figure, it can be seen that all electrodes add information, as they all differ in their characteristics.

2.4. iSpeech-CNN Architecture

In this section, we introduce the proposed CNN-based iSpeech architecture. After extensive experiments on the existing CNN architecture for inner speech classification tasks, we determined that downsampling the signal degrades classification accuracy and thus removed it from the proposed architecture. The iSpeech-CNN architecture for imagined vowel and word recognition is shown in Figure 3. The same architecture is used to train separate networks for imagined vowels and words. The only difference is that the network for vowels has five classes; therefore, its softmax layer outputs five probability scores, one for each vowel. In the same manner, the network for words has six classes; therefore, its softmax layer outputs six probability scores, one for each word. Unlike [7], after extensive experimentation, we observed that the number of filters affects the overall performance of the system; 40 filters are used in the first four layers of both networks, and the next three layers have 100, 250, and 500 filters, respectively. The filter sizes, however, differ across layers: filters of size (1 × 5), (6 × 1), (1 × 5), (1 × 3), (1 × 3), (1 × 3), and (1 × 3) are used in the first to seventh layers, respectively.
We used an Adam optimizer with a dropout of 0.0002 for the vowel classification and 0.0001 for the word classification; as the network is very small, dropping out more features would adversely affect the performance. The initial learning rate was fixed to 0.001 with a piecewise learning rate scheduler. Our network was trained for 60 epochs, and the model with the best validation loss was chosen as the final network. The regularization factor was also fixed to a value of 0.001. Our proposed iSpeech-CNN architecture follows the same structure as [7] but with a different number of filters, different training parameters, and different preprocessing.
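The PyTorch sketch below illustrates the layer plan described above (filter counts 40/40/40/40/100/250/500 and the listed kernel sizes) together with the stated optimizer settings. It is not the authors’ code: activation functions, normalization, pooling, padding, the classifier head, and the scheduler milestones are not fully specified in the text, so those choices are assumptions.

```python
# Hedged sketch of the iSpeech-CNN layer plan; details not given in the text are assumed.
import torch
import torch.nn as nn

class ISpeechCNN(nn.Module):
    def __init__(self, n_classes=5, n_channels=6):
        super().__init__()
        filters = [40, 40, 40, 40, 100, 250, 500]                     # per-layer filter counts from the text
        kernels = [(1, 5), (n_channels, 1), (1, 5), (1, 3), (1, 3), (1, 3), (1, 3)]
        layers, in_ch = [], 1
        for out_ch, k in zip(filters, kernels):
            layers += [nn.Conv2d(in_ch, out_ch, kernel_size=k),
                       nn.BatchNorm2d(out_ch),                        # assumed normalization
                       nn.ELU(),                                      # assumed activation
                       nn.Dropout(p=0.0002)]                          # dropout value quoted in the text (vowels)
            in_ch = out_ch
        self.features = nn.Sequential(*layers)
        self.pool = nn.AdaptiveAvgPool2d(1)                           # assumed pooling before the classifier
        self.classifier = nn.Linear(filters[-1], n_classes)

    def forward(self, x):                                             # x: (batch, 1, channels, time)
        x = self.pool(self.features(x)).flatten(1)
        return self.classifier(x)                                     # softmax is applied inside the loss

model = ISpeechCNN(n_classes=5)                                       # 5 for vowels, 6 for words
out = model(torch.randn(2, 1, 6, 2048))                               # e.g., 2 s at 1024 Hz -> (2, 5) logits
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3, weight_decay=1e-3)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=20, gamma=0.1)  # a piecewise-style schedule
```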

3. Experimental Approaches and Performance Measures

This section describes the experimental approaches utilized for the analysis of the EEG data and the performance measures used to quantify the results.

3.1. Experimental Approaches

Three experimental approaches were used for analysis, and they are discussed in detail in the following subsections.

3.1.1. Subject-Dependent/Within-Subject Approach

Subject-dependent/within-subject classification is a baseline approach that is commonly used for the analysis of inner speech signals. In this approach, a separate model is trained for each individual subject, and the training, validation, and testing sets all contain data from that same subject. This approach essentially measures how much an individual subject’s data changes (or varies) over time.
To divide each subject’s data into training, validation, and testing sets, a ratio of 80-10-10 is used. The training, validation, and testing sets contain samples from all vowel/word categories (five/six, respectively) in the mentioned ratio. To reduce bias due to a particular split of the samples, five different random trials are used, and the mean accuracy and standard deviation are reported for all experimental approaches; a minimal splitting sketch is given below.
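The following sketch shows one way to produce the per-subject 80-10-10 split over five random trials. Here `X` and `y` stand for one subject’s EEG samples and labels, and the stratified splitting via scikit-learn is our assumption rather than the authors’ exact procedure.

```python
# Subject-dependent 80/10/10 split repeated over five random trials (assumed implementation).
import numpy as np
from sklearn.model_selection import train_test_split

def subject_dependent_splits(X, y, n_trials=5, seed=0):
    rng = np.random.RandomState(seed)
    for _ in range(n_trials):
        # 80% train, 20% held out; the held-out part is split 50/50 into validation and test.
        X_train, X_rest, y_train, y_rest = train_test_split(
            X, y, test_size=0.2, stratify=y, random_state=rng)
        X_val, X_test, y_val, y_test = train_test_split(
            X_rest, y_rest, test_size=0.5, stratify=y_rest, random_state=rng)
        yield (X_train, y_train), (X_val, y_val), (X_test, y_test)
```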

3.1.2. Subject Independent: Leave-One-Out Approach

The subject-dependent approach does not show generalization capability, as it models one subject at a time (the testing data contain only samples of the subject being modeled). The leave-one-out approach is a subject-independent approach in which the data of each subject are tested using a model trained on the data of all other subjects, i.e., n − 1 subjects out of a total of n are used for training the model, and the remaining subject is used for testing. For example, Model-01 is trained with data from all subjects except Subject01 and is tested on Subject01 (see Table A3, Table A4, Table A5 and Table A6).
This approach allows a deeper analysis when there are few subjects or entities and shows how each individual subject affects the overall estimate obtained from the rest of the subjects. Hence, this approach may provide more generalizable conclusions than subject-specific models; a minimal sketch of the leave-one-subject-out loop is given below.
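The sketch below outlines the leave-one-subject-out loop under the assumption that the data are organized as a mapping from subject IDs to (X, y) arrays; `train_model` and `evaluate` are hypothetical stand-ins for the CNN training and testing code.

```python
# Leave-one-subject-out loop: train Model-k on all subjects except subject k, test on subject k.
import numpy as np

def leave_one_subject_out(data, train_model, evaluate):
    scores = {}
    for held_out in data:
        # Pool the training data from every subject except the held-out one.
        X_train = np.concatenate([X for s, (X, y) in data.items() if s != held_out])
        y_train = np.concatenate([y for s, (X, y) in data.items() if s != held_out])
        model = train_model(X_train, y_train)
        X_test, y_test = data[held_out]
        scores[held_out] = evaluate(model, X_test, y_test)  # accuracy on the unseen subject
    return scores
```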

3.1.3. Mixed Approach

The mixed approach is a variation of the subject-independent approach. Although leave-one-out is truly independent, the mixed approach can be seen as less independent in nature, as it includes data from all subjects in training, validation, and testing. Because it contains the data of all subjects, we call it the mixed approach. It differs from the within-subject and leave-one-out approaches, where n models (one per subject) are trained; in this approach, only one model is trained for all subjects. The test set contains samples of all subjects under all categories (vowels/words).
To run this experiment, 80% of the samples of all subjects are included in the training set, 10% in the validation set, and the remaining 10% in the test set, as sketched below. We also ensure class balancing, i.e., each vowel/word class has approximately the same number of samples. The same experiment is repeated for five random trials, and the mean accuracy along with the standard deviation is reported.
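A minimal sketch of this pooled, class-stratified split is shown below; the `data` dictionary and its dimensions are dummy placeholders, and stratification via scikit-learn is our assumption for how class balance could be maintained.

```python
# Mixed approach: pool all subjects, then split 80/10/10 with class stratification (assumed implementation).
import numpy as np
from sklearn.model_selection import train_test_split

# Dummy stand-in for per-subject (samples, channels, time) arrays and labels.
data = {f"S{i:02d}": (np.random.randn(20, 6, 256), np.random.randint(0, 5, 20)) for i in range(1, 16)}

X_all = np.concatenate([X for X, y in data.values()])
y_all = np.concatenate([y for X, y in data.values()])

X_train, X_rest, y_train, y_rest = train_test_split(
    X_all, y_all, test_size=0.2, stratify=y_all, random_state=0)
X_val, X_test, y_val, y_test = train_test_split(
    X_rest, y_rest, test_size=0.5, stratify=y_rest, random_state=0)
```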

3.2. Performance Measures

The mean and standard deviation are used to report the performance of all the approaches. For the final results, the F-scores are also given.
Mean: The mean is the average of a group of scores. The scores are totaled and then divided by the number of scores. The mean is sensitive to extreme scores when the population samples are small.
Standard deviation: In statistics, the standard deviation (SD) is a widely used measure of variability. It depicts the degree of deviation from the average (mean). A low SD implies that the data points are close to the mean, whereas a high SD suggests that the data span a wide range of values.
F-score: The F-score is a measure of a model’s accuracy that combines the precision and recall of the model. It is calculated by the following formula:
F-score = (2 × Precision × Recall) / (Precision + Recall)
where precision is the fraction of true positives among the examples classified as positive by the model, and recall is the fraction of the actual positive examples that are correctly classified as positive.
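As a small worked example of these definitions, the snippet below computes macro-averaged precision and recall with scikit-learn and then applies the F-score formula above; the label vectors are arbitrary toy data.

```python
# Toy example of precision, recall, and F-score (macro-averaged over classes).
from sklearn.metrics import precision_score, recall_score

y_true = [0, 1, 2, 2, 1, 0]
y_pred = [0, 2, 2, 2, 1, 1]

precision = precision_score(y_true, y_pred, average="macro")
recall = recall_score(y_true, y_pred, average="macro")
f_score = 2 * precision * recall / (precision + recall)  # formula above applied to the averaged scores
print(f"precision={precision:.3f} recall={recall:.3f} F-score={f_score:.3f}")
```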

4. Results and Discussion: Vowels (Five Classes)

The results obtained with the subject-specific approach are discussed first, as this approach is common in most EEG-related papers. All code, raw data, and preprocessed data are provided on GitHub (https://github.com/LTU-Machine-Learning/Rethinking-Methods-Inner-Speech, accessed on 27 February 2022). The other approaches are discussed in the subsequent subsections.

4.1. Subject-Dependent/Within-Subject Classification

In this section, we report the results when applying the subject-dependent approach. Figure 4, Figure 5 and Figure A1 and Table A5 show the results of our proposed iSpeech-CNN architecture. Table A1 and Table A2 show the results of the reference CNN architecture.

4.1.1. Ablation Study—Influence of Downsampling

Table A1 shows the results with raw and downsampled data when used within the reference CNN architecture framework.
It is clearly observed from Table A1 that downsampling the signals results in a loss of information. Figure 4 shows that there is a significant performance increase between sampling rates of 32 Hz and 1024 Hz; however, some other differences (e.g., for 40 filters, between 128 Hz and 1024 Hz) are not significant. For clarity, standard error bars are added for each data point. The highest vowel recognition performance (35.20%) is observed at the highest sampling rate (1024 Hz), i.e., without downsampling.
In other words, the chosen downsampling rate was not sufficient to retain the original information; therefore, further results are reported for both raw and downsampled data, in order to obtain a better insight into the preprocessing (i.e., filtering and ICA) stage.

4.1.2. Ablation Study—Influence of Preprocessing

Filtering and artifact removal play an important role when analyzing EEG signals. We applied both bandpass filtering (see Section 2) and Picard (preconditioned ICA for real data) for artifact removal to obtain more informative signals. Table A2 shows the results when preprocessing is applied to the raw and downsampled data within the reference CNN architecture framework. The overall mean accuracy decreased from 32.51% without downsampling to 30.90% with downsampling. The following points can be noted from Table A2: (1) Filtering and artifact removal strongly improve the performance for both raw and downsampled data (compare Table A1). (2) The improved performance can also be observed for each individual subject, together with a smaller standard deviation. (3) The CNN framework yields higher performance than the handcrafted features and the GRU (see Table 2). We also performed experiments with an LSTM classifier and observed near-chance accuracies, with no significant difference compared to the GRU; therefore, iSpeech-CNN performs best among all classifiers.

4.1.3. Ablation Study—Influence of Architecture

Based on the CNN literature in the EEG paradigm [7,26], adding more layers to the reference CNN architecture does not improve performance; however, changing the number of filters in the initial layers does yield some improvement. The CNN literature for EEG signals likewise indicates that having a sufficient number of filters in the initial layers helps [7,27]. Here, we modify the three initial layers because, unlike for natural images, for speech the initial layers are more specific to the task than the last few layers. The results obtained when changing the number of filters in the initial layers within the iSpeech-CNN architecture are shown in Table A5. In the reference CNN architecture, this filter number was 20 for the initial three layers; we changed it to 40 in the iSpeech-CNN architecture (decided based on experimentation). Table A5 clearly shows that this change yields higher performance than the reference number of filters (compare with the reference architecture results in Table A1 and Table A2 in Appendix A). This improvement is observed both with and without downsampling and for individual subjects (see Figure A1 and Figure 5). The standard deviation also decreases with these modifications (see Table A5).

4.2. Mixed Approach Results

This section discusses the results of the mixed approach. In this approach, data from all subjects are included in training, validation, and testing. Table A4 shows the results for the mixed approach with and without downsampling. These results were compiled with filtering and ICA in both the reference and modified CNN architectures.
From these results, it is noted that the obtained accuracies are random in nature. The modified CNN architecture parameters do not yield any improvement and also show chance-level accuracies. In other words, it is difficult to achieve generalized performance with EEG signals. The EEG literature also supports the observation that models trained on data from one subject cannot be generalized to other subjects, even when the data have been recorded under the same setup conditions.
Determining the optimal frequency sub-bands corresponding to each subject could be one possible direction that may be successful in such a scenario. We intend to explore this direction in our future work.

4.3. Subject-Independent: Leave-One-Out Results

Having discussed the subject-specific and mixed results, in this section, the subject-independent results are discussed. The leave-one-out approach is a variation of the mixed approach; however, unlike the mixed approach, the data of the test subject are not included in the training. For example, in Figure 6, all subjects except Subject01 were used to train Model-01. Figure 6 and Table A6 show the results using the iSpeech-CNN architecture, while Table A3 shows the results using the reference architecture.
It can be noted that having fewer subjects in training (one less compared to the mixed approach) results in slightly better behavior than the mixed approach, where all subjects are included in the training. Moreover, changing the reference CNN parameters to our proposed iSpeech-CNN parameters also improves performance (see Figure 6 and Table A6).
The mixed and leave-one-out approaches both showed that generalizing the performance over all subjects is difficult in the EEG scenario. Hence, a preprocessing stage that makes the data more discriminative is needed.

5. Results and Discussion: Words (Six Classes)

Having discussed all the approaches for the vowel category, we noticed that only the subject-specific approach showed performance that was not random in nature; therefore, in this section, we only report results for the subject-specific approach for the word category.
This category contains six different classes (see Section 2.1). Table A8, Figure A2, and Figure 7 show the performance results for the classification of the six words using the proposed iSpeech-CNN architecture. The performance results when using the reference architecture can be found in Appendix B. From these tables and figures, the same kind of behavior as for vowels is observed. The change in the number of filters in the initial layers affects the performance, as shown in Table A8. The downsampling of the data also affects the overall performance. Figure 8 shows that the highest word recognition performance (29.21%) is observed at the highest sampling rate (1024 Hz), i.e., without downsampling. For clarity, standard error bars are added for each data point. As opposed to vowel recognition, there is a steady increase in performance with increasing sampling rate (though, again, the difference between two neighboring values is not always significant).
The iSpeech-CNN architecture shows better performance than handcrafted features such as real-time wavelet energy [8] and than the reference architecture (Appendix B).
Overall, we achieve a state-of-the-art performance of 29.21% when classifying the six words using our proposed iSpeech-CNN architecture and preprocessing methodology without downsampling.
The performance reported in this work is based on the CNN architecture of the reference network [7]. No other architecture was investigated, because the goal of the proposed work is to reproduce the results of Cooney et al. [7] and make the network and code available to the research community.

6. Performance Comparison and Related Discussion

In this section, we compare our results on the vowels and words dataset with existing work and discuss related findings. Based on the performances reported in Table 2, it is clear that the CNN performs better than the handcrafted features for both datasets.
The precision, weighted F-score, and F-score of our proposed iSpeech-CNN, in comparison with the results reported by Cooney et al. [7], are shown in Table 3. From this table, we note that our proposed system achieves higher precision but a lower F-score compared to the model in [7]. Hence, reproducing the results reported in [7] is difficult.
Our proposed CNN architecture and preprocessing methodology outperform the existing work in the word and vowel categories when following the subject-dependent approach, as shown in Table 2; however, it is worth mentioning that for the vowel classification, unlike [7], we do not downsample the data. Furthermore, when [7] use a transfer learning approach for the vowel classification task, they report an overall accuracy of 35.68%, which is slightly higher than our reported accuracy in the subject-dependent approach.
Based on one-tailed paired t-test results, we found a statistically significant difference between iSpeech-CNN and the reference paper [7] for word classification and for vowel classification, when comparing to the work without transfer learning (which is the fair comparison, as transfer learning adds a new dimension). We also found no significant difference between the best reported results with transfer learning [7,24] and iSpeech-CNN. Furthermore, when we run the one-tailed paired t-test for iSpeech-CNN with and without downsampling, we find that the differences are statistically significant for the word task (p = 0.0005) but not for the vowel task. We used a one-tailed paired t-test on 10% of the overall samples, i.e., 332 for vowels and 403 for words (a minimal sketch of this test is given at the end of this section).
Hence, it is observed that the correct selection of preprocessing methods and of the number of filters in the CNN greatly adds to the performance. The detailed results for each category and each approach are provided in Appendix A and Appendix B.
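For reference, a one-tailed paired t-test of this kind can be run as sketched below; the accuracy arrays are hypothetical placeholders, and the `alternative="greater"` direction encodes the hypothesis that iSpeech-CNN outperforms the reference.

```python
# One-tailed paired t-test between two systems' paired accuracy scores (placeholder data).
from scipy import stats

acc_ispeech = [0.33, 0.29, 0.31, 0.30, 0.28]     # hypothetical paired scores, system A
acc_reference = [0.26, 0.25, 0.27, 0.24, 0.26]   # hypothetical paired scores, system B

# alternative="greater" tests whether iSpeech-CNN scores are higher than the reference.
t_stat, p_value = stats.ttest_rel(acc_ispeech, acc_reference, alternative="greater")
print(t_stat, p_value)
```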

7. Conclusions

This study explores the effectiveness of preprocessing steps and of the correct selection of the number of filters in the initial layers of the CNN in the context of both vowel and word classification. The classification results are reported on a publicly available inner speech dataset of five vowels and six words [8]. Based on the obtained accuracies, we find that this direction of exploration truly adds to the performance. We report state-of-the-art classification performance for vowels and words, with mean accuracies of 35.20% and 29.21%, respectively, without downsampling the original data. Mean accuracies of 34.88% and 27.38% are obtained for vowels and words, respectively, with downsampling. Furthermore, the CNN code proposed in this study is publicly available to ensure the reproducibility of the research results and to promote open research. Our proposed iSpeech-CNN architecture and preprocessing methodology are the same for both datasets (vowels and words).
Evaluating our system on other publicly available datasets is part of our future work. Furthermore, we will address the issues related to the selection of the downsampling rate and the selection of the optimal frequency sub-bands with respect to subjects.

Author Contributions

Conceptualization, F.S.L.; methodology, F.S.L. and V.G.; software, F.S.L. and V.G.; validation, F.S.L., V.G., R.S. and K.D.; formal analysis, F.S.L., V.G., R.S. and K.D.; investigation, F.S.L., V.G., R.S. and K.D.; writing—original draft preparation, F.S.L., V.G., R.S., K.D. and M.L.; writing—review and editing, F.S.L., V.G., R.S., K.D. and M.L.; visualization, F.S.L., V.G., R.S. and K.D.; supervision, F.S.L. and M.L.; project administration, F.S.L. and M.L.; funding acquisition, F.S.L. and M.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research received funding from the Grants for excellent research projects proposals of SRT.ai 2022.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The code used in this paper is publicly accessible on Github (https://github.com/LTU-Machine-Learning/Rethinking-Methods-Inner-Speech, accessed on 27 February 2022).

Conflicts of Interest

The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript, or in the decision to publish the results.

Appendix A. The Results on Vowels

Table A1. Subject-dependent results for the vowels using raw and downsampled data (Reference Architecture). Values are accuracies (%).

Subject | Raw: Train | Raw: Validation | Raw: Test | Downsampled: Train | Downsampled: Validation | Downsampled: Test
S01 | 91.07 | 36.00 | 35.29 | 79.07 | 39.20 | 26.47
S02 | 92.44 | 29.00 | 17.00 | 72.00 | 24.00 | 16.00
S03 | 83.65 | 34.00 | 27.69 | 86.94 | 26.00 | 19.23
S04 | 89.94 | 38.00 | 33.33 | 89.33 | 30.00 | 31.67
S05 | 86.59 | 16.00 | 21.60 | 72.82 | 19.00 | 17.60
S06 | 91.73 | 27.00 | 25.85 | 80.00 | 29.00 | 27.80
S07 | 95.17 | 31.00 | 26.86 | 85.66 | 33.00 | 28.00
S08 | 88.12 | 29.00 | 21.43 | 83.06 | 24.00 | 21.43
S09 | 83.89 | 35.20 | 30.00 | 82.38 | 30.40 | 26.00
S10 | 88.35 | 29.00 | 20.80 | 63.88 | 26.00 | 19.20
S11 | 80.91 | 25.00 | 22.61 | 86.06 | 23.00 | 16.52
S12 | 88.57 | 29.00 | 26.67 | 96.23 | 37.00 | 30.83
S13 | 80.00 | 38.00 | 29.09 | 93.94 | 36.00 | 31.82
S14 | 79.66 | 27.20 | 22.76 | 73.49 | 27.20 | 23.45
S15 | 83.20 | 23.00 | 20.00 | 91.73 | 28.00 | 20.89
Mean | 86.88 | 29.76 | 25.29 | 82.44 | 28.79 | 23.79
Standard Deviation | 4.81 | 5.96 | 5.14 | 8.73 | 5.44 | 5.35
Table A2. Subject-dependent results for vowels with and without downsampling on preprocessed data (filtering and artifact removal) (Reference Architecture). Values are accuracies (%).

Subject | No Downsampling: Train | No Downsampling: Validation | No Downsampling: Test | Downsampling: Train | Downsampling: Validation | Downsampling: Test
S01 | 96.65 | 38.40 | 38.82 | 91.44 | 43.20 | 38.82
S02 | 92.78 | 40.00 | 25.00 | 99.33 | 32.00 | 27.00
S03 | 99.88 | 38.00 | 34.62 | 99.41 | 36.00 | 32.31
S04 | 99.15 | 43.00 | 32.50 | 97.45 | 42.00 | 30.83
S05 | 89.29 | 37.00 | 26.40 | 93.41 | 34.00 | 35.20
S06 | 95.47 | 38.00 | 37.56 | 98.80 | 38.00 | 32.20
S07 | 98.62 | 36.00 | 27.43 | 81.52 | 28.00 | 28.00
S08 | 97.06 | 45.00 | 38.57 | 98.59 | 39.00 | 32.14
S09 | 97.41 | 36.00 | 35.33 | 97.41 | 32.00 | 30.00
S10 | 96.24 | 35.00 | 37.60 | 98.47 | 35.00 | 28.80
S11 | 88.69 | 30.00 | 31.30 | 91.66 | 35.00 | 33.91
S12 | 86.86 | 35.00 | 30.83 | 91.31 | 28.00 | 24.17
S13 | 99.54 | 40.00 | 33.64 | 95.89 | 46.00 | 31.82
S14 | 87.43 | 39.20 | 35.86 | 84.11 | 40.00 | 33.79
S15 | 91.87 | 33.00 | 22.22 | 99.73 | 30.00 | 24.44
Mean | 94.46 | 37.57 | 32.51 | 94.57 | 35.88 | 30.90
Standard Deviation | 4.44 | 3.62 | 5.05 | 5.48 | 5.28 | 3.83
Table A3. Leave-one-out results for vowels with and without downsampling on preprocessed data (filtering and artifact removal) (Reference Architecture). The listed subject is in the test set. Values are accuracies (%).

Test Subject | No Downsampling: Validation | No Downsampling: Test | Downsampling: Validation | Downsampling: Test
S01 | 71.43 | 21.17 | 23.08 | 18.61
S02 | 88.99 | 20.91 | 40.41 | 21.36
S03 | 86.00 | 25.93 | 57.48 | 19.91
S04 | 87.77 | 24.40 | 55.20 | 22.01
S05 | 69.33 | 19.53 | 58.72 | 24.65
S06 | 55.36 | 14.69 | 60.84 | 23.70
S07 | 60.30 | 26.00 | 36.36 | 30.50
S08 | 83.99 | 28.44 | 41.51 | 21.10
S09 | 45.45 | 26.25 | 44.05 | 24.58
S10 | 73.72 | 20.00 | 41.99 | 17.67
S11 | 51.45 | 25.23 | 68.88 | 24.31
S12 | 91.90 | 22.37 | 55.34 | 20.55
S13 | 63.63 | 20.28 | 61.47 | 22.12
S14 | 78.62 | 23.14 | 51.31 | 21.83
S15 | 84.42 | 25.58 | 24.28 | 22.33
Mean | 72.82 | 22.93 | 48.06 | 22.35
Standard Deviation | 14.84 | 3.55 | 13.10 | 2.95
Table A4. Mixed-approach results for vowels with and without downsampling on preprocessed data (filtering and artifact removal) (Reference and iSpeech-CNN Architectures). Values are accuracies (%).

Reference Architecture parameters:
Trial | No Downsampling: Train | No Downsampling: Validation | No Downsampling: Test | Downsampling: Train | Downsampling: Validation | Downsampling: Test
Trial 1 | 72.45 | 20.95 | 22.27 | 62.61 | 20.32 | 17.63
Trial 2 | 76.73 | 22.22 | 19.03 | 58.75 | 20.63 | 24.36
Trial 3 | 64.82 | 22.54 | 19.26 | 50.58 | 20.32 | 20.65
Trial 4 | 67.55 | 18.10 | 19.95 | 57.04 | 18.41 | 20.19
Trial 5 | 60.78 | 20.32 | 22.04 | 47.78 | 22.86 | 23.90
Mean | 68.47 | 20.83 | 20.51 | 55.35 | 20.51 | 21.35
Standard Deviation | 5.61 | 1.59 | 1.38 | 5.43 | 1.42 | 2.50

iSpeech-CNN Architecture parameters:
Trial | No Downsampling: Train | No Downsampling: Validation | No Downsampling: Test | Downsampling: Train | Downsampling: Validation | Downsampling: Test
Trial 1 | 57.98 | 20.63 | 20.42 | 55.10 | 21.90 | 18.33
Trial 2 | 68.79 | 21.27 | 19.72 | 46.46 | 23.17 | 22.04
Trial 3 | 54.63 | 21.59 | 21.11 | 44.44 | 18.73 | 22.27
Trial 4 | 37.70 | 17.46 | 20.42 | 24.36 | 21.27 | 21.35
Trial 5 | 86.15 | 24.13 | 20.19 | 57.20 | 22.86 | 20.42
Mean | 61.05 | 21.02 | 20.37 | 45.51 | 21.59 | 20.88
Standard Deviation | 16.04 | 2.14 | 0.45 | 11.64 | 1.58 | 1.43
Table A5. Subject-dependent results for vowels with and without downsampling on preprocessed signals (filtering and artifact removal) (iSpeech-CNN Architecture). Values are accuracies (%).

Subject | No Downsampling: Train | No Downsampling: Validation | No Downsampling: Test | Downsampling: Train | Downsampling: Validation | Downsampling: Test
S01 | 86.33 | 48.80 | 37.65 | 81.86 | 33.60 | 37.65
S02 | 98.00 | 38.00 | 35.00 | 84.33 | 31.00 | 30.00
S03 | 85.76 | 38.00 | 39.23 | 98.00 | 37.00 | 36.15
S04 | 97.09 | 46.00 | 32.50 | 97.45 | 41.00 | 35.83
S05 | 91.41 | 44.00 | 34.40 | 94.59 | 38.00 | 36.00
S06 | 89.87 | 41.00 | 34.63 | 95.33 | 38.00 | 31.71
S07 | 98.34 | 39.00 | 44.57 | 98.21 | 39.00 | 41.14
S08 | 80.24 | 41.00 | 38.57 | 96.82 | 38.00 | 38.57
S09 | 96.65 | 32.80 | 29.33 | 92.22 | 35.20 | 30.00
S10 | 97.41 | 44.00 | 34.40 | 87.53 | 37.00 | 40.80
S11 | 95.43 | 41.00 | 32.17 | 98.06 | 31.00 | 30.43
S12 | 91.54 | 40.00 | 31.67 | 83.43 | 34.00 | 35.00
S13 | 82.74 | 46.00 | 36.36 | 85.60 | 46.00 | 37.27
S14 | 80.23 | 43.20 | 38.62 | 78.06 | 36.80 | 33.79
S15 | 89.60 | 31.00 | 28.89 | 90.80 | 38.00 | 28.89
Mean | 90.71 | 40.92 | 35.20 | 90.82 | 36.91 | 34.88
Standard Deviation | 6.24 | 4.65 | 3.99 | 6.60 | 3.66 | 3.83
Table A6. Leave-one-out results for vowels with and without downsampling on preprocessed signals (filtering and artifact removal) (iSpeech-CNN Architecture). Values are accuracies (%).

Test Subject | No Downsampling: Validation | No Downsampling: Test | Downsampling: Validation | Downsampling: Test
S01 | 46.65 | 26.64 | 38.23 | 25.55
S02 | 83.11 | 27.27 | 42.02 | 26.36
S03 | 74.32 | 27.78 | 43.16 | 24.07
S04 | 22.27 | 32.54 | 56.16 | 19.62
S05 | 64.79 | 26.98 | 56.24 | 26.98
S06 | 77.13 | 22.75 | 47.73 | 24.64
S07 | 47.21 | 18.50 | 40.76 | 23.50
S08 | 39.86 | 23.85 | 50.16 | 30.28
S09 | 82.38 | 25.00 | 46.36 | 24.58
S10 | 45.82 | 18.14 | 48.02 | 21.86
S11 | 52.42 | 22.02 | 54.23 | 21.10
S12 | 34.26 | 20.09 | 67.03 | 23.74
S13 | 46.05 | 25.35 | 50.08 | 25.35
S14 | 46.94 | 20.09 | 41.85 | 22.27
S15 | 51.69 | 20.93 | 46.31 | 18.60
Mean | 54.33 | 23.86 | 48.56 | 23.90
Standard Deviation | 18.13 | 4.02 | 7.51 | 2.97
Figure A1. Subject-dependent results for vowels with downsampling on preprocessed signals (iSpeech-CNN Architecture). Chance accuracy: 20%.

Appendix B. The Results on Words

Table A7. Subject-dependent results for words with and without downsampling on preprocessed data (filtering and artifact removal) (Reference Architecture). Values are accuracies (%).

Subject | No Downsampling: Train | No Downsampling: Validation | No Downsampling: Test | Downsampling: Train | Downsampling: Validation | Downsampling: Test
S01 | 90.60 | 31.33 | 30.00 | 83.16 | 32.67 | 28.50
S02 | 96.57 | 30.83 | 23.70 | 79.62 | 30.00 | 22.22
S03 | 90.19 | 32.00 | 31.58 | 67.33 | 30.00 | 29.47
S04 | 92.75 | 36.67 | 25.14 | 75.49 | 33.33 | 25.71
S05 | 87.53 | 41.67 | 28.13 | 69.89 | 30.83 | 31.25
S06 | 83.12 | 39.17 | 25.65 | 71.72 | 29.17 | 23.04
S07 | 92.22 | 26.67 | 25.33 | 79.39 | 35.00 | 21.33
S08 | 85.37 | 38.33 | 30.53 | 86.48 | 34.17 | 23.16
S09 | 92.86 | 32.50 | 25.56 | 82.57 | 30.00 | 28.89
S10 | 89.05 | 31.67 | 29.41 | 92.00 | 30.00 | 30.59
S11 | 93.14 | 31.33 | 27.37 | 95.24 | 32.00 | 22.11
S12 | 88.00 | 33.33 | 25.62 | 82.00 | 34.67 | 32.50
S13 | 96.76 | 28.33 | 33.33 | 86.10 | 39.17 | 25.33
S14 | 86.10 | 35.33 | 28.64 | 68.95 | 36.00 | 28.18
S15 | 96.98 | 27.50 | 29.23 | 91.77 | 28.33 | 27.69
Mean | 90.75 | 33.11 | 27.95 | 80.78 | 32.36 | 26.66
Standard Deviation | 4.27 | 4.36 | 2.76 | 8.48 | 2.91 | 3.53
Table A8. Subject-dependent results for words with and without downsampling on preprocessed signals (filtering and artifact removal) (iSpeech-CNN Architecture). Values are accuracies (%).

Subject | No Downsampling: Train | No Downsampling: Validation | No Downsampling: Test | Downsampling: Train | Downsampling: Validation | Downsampling: Test
S01 | 97.95 | 34.67 | 33.50 | 77.35 | 30.00 | 25.50
S02 | 92.86 | 34.17 | 32.59 | 89.05 | 28.33 | 23.70
S03 | 93.05 | 32.00 | 32.11 | 76.67 | 28.67 | 34.74
S04 | 94.51 | 35.00 | 26.86 | 95.59 | 29.17 | 31.43
S05 | 95.38 | 34.17 | 28.75 | 82.47 | 26.67 | 31.25
S06 | 96.99 | 38.33 | 26.09 | 67.20 | 30.83 | 26.09
S07 | 87.47 | 33.33 | 24.00 | 80.71 | 30.00 | 16.00
S08 | 98.33 | 35.00 | 32.11 | 83.70 | 30.00 | 28.95
S09 | 98.86 | 37.50 | 27.78 | 70.00 | 27.50 | 28.89
S10 | 92.38 | 32.50 | 32.35 | 93.52 | 34.17 | 27.06
S11 | 97.43 | 36.67 | 27.37 | 87.90 | 35.33 | 26.32
S12 | 92.86 | 37.33 | 26.25 | 69.05 | 32.00 | 25.62
S13 | 84.57 | 31.67 | 26.00 | 68.29 | 31.67 | 33.33
S14 | 96.38 | 37.33 | 32.73 | 66.38 | 30.67 | 27.27
S15 | 94.69 | 33.33 | 29.74 | 86.67 | 31.67 | 24.62
Mean | 94.25 | 34.87 | 29.21 | 79.64 | 30.45 | 27.38
Standard Deviation | 4.00 | 2.14 | 3.12 | 9.51 | 2.25 | 4.37
Figure A2. Subject-dependent results for words with downsampling on preprocessed signals (iSpeech-CNN Architecture). Chance accuracy: 16.66%.

Appendix C. Dataset Samples

Figure A3. Example of preprocessed signals for all electrodes (after ICA) for the vowel /a/ for Subject01 and Subject02.
Figure A4. Example of preprocessed signals (after ICA) for all vowels and all electrodes for Subject02.

References

  1. Alderson-Day, B.; Fernyhough, C. Inner speech: Development, cognitive functions, phenomenology, and neurobiology. Psychol. Bull. 2015, 141, 931.
  2. Whitford, T.J.; Jack, B.N.; Pearson, D.; Griffiths, O.; Luque, D.; Harris, A.W.; Spencer, K.M.; Le Pelley, M.E. Neurophysiological evidence of efference copies to inner speech. Elife 2017, 6, e28197.
  3. Smallwood, J.; Schooler, J.W. The science of mind wandering: Empirically navigating the stream of consciousness. Annu. Rev. Psychol. 2015, 66, 487–518.
  4. Filik, R.; Barber, E. Inner speech during silent reading reflects the reader’s regional accent. PLoS ONE 2011, 6, e25782.
  5. Langland-Hassan, P.; Vicente, A. Inner Speech: New Voices; Oxford University Press: New York, NY, USA, 2018.
  6. Zhao, S.; Rudzicz, F. Classifying phonological categories in imagined and articulated speech. In Proceedings of the 2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), South Brisbane, QLD, Australia, 19–24 April 2015; pp. 992–996.
  7. Cooney, C.; Folli, R.; Coyle, D. Optimizing layers improves CNN generalization and transfer learning for imagined speech decoding from EEG. In Proceedings of the 2019 IEEE International Conference on Systems, Man and Cybernetics (SMC), Bari, Italy, 6–9 October 2019; pp. 1311–1316.
  8. Coretto, G.A.P.; Gareis, I.E.; Rufiner, H.L. Open access database of EEG signals recorded during imagined speech. In Proceedings of the 12th International Symposium on Medical Information Processing and Analysis, Tandil, Argentina, 5–7 December 2017; Volume 10160, p. 1016002.
  9. Herff, C.; Heger, D.; De Pesters, A.; Telaar, D.; Brunner, P.; Schalk, G.; Schultz, T. Brain-to-text: Decoding spoken phrases from phone representations in the brain. Front. Neurosci. 2015, 9, 217.
  10. Martin, S.; Iturrate, I.; Millán, J.d.R.; Knight, R.T.; Pasley, B.N. Decoding inner speech using electrocorticography: Progress and challenges toward a speech prosthesis. Front. Neurosci. 2018, 12, 422.
  11. Dash, D.; Wisler, A.; Ferrari, P.; Davenport, E.M.; Maldjian, J.; Wang, J. MEG sensor selection for neural speech decoding. IEEE Access 2020, 8, 182320–182337.
  12. Dash, D.; Ferrari, P.; Wang, J. Decoding imagined and spoken phrases from non-invasive neural (MEG) signals. Front. Neurosci. 2020, 14, 290.
  13. Yoo, S.S.; Fairneny, T.; Chen, N.K.; Choo, S.E.; Panych, L.P.; Park, H.; Lee, S.Y.; Jolesz, F.A. Brain–computer interface using fMRI: Spatial navigation by thoughts. Neuroreport 2004, 15, 1591–1595.
  14. Kamavuako, E.N.; Sheikh, U.A.; Gilani, S.O.; Jamil, M.; Niazi, I.K. Classification of overt and covert speech for near-infrared spectroscopy-based brain computer interface. Sensors 2018, 18, 2989.
  15. Rezazadeh Sereshkeh, A.; Yousefi, R.; Wong, A.T.; Rudzicz, F.; Chau, T. Development of a ternary hybrid fNIRS-EEG brain–computer interface based on imagined speech. Brain-Comput. Interfaces 2019, 6, 128–140.
  16. Panachakel, J.T.; Ramakrishnan, A.G. Decoding covert speech from EEG-A comprehensive review. Front. Neurosci. 2021, 15, 642251.
  17. Schirrmeister, R.T.; Springenberg, J.T.; Fiederer, L.D.J.; Glasstetter, M.; Eggensperger, K.; Tangermann, M.; Hutter, F.; Burgard, W.; Ball, T. Deep learning with convolutional neural networks for EEG decoding and visualization. Hum. Brain Mapp. 2017, 38, 5391–5420.
  18. Angrick, M.; Ottenhoff, M.C.; Diener, L.; Ivucic, D.; Ivucic, G.; Goulis, S.; Saal, J.; Colon, A.J.; Wagner, L.; Krusienski, D.J.; et al. Real-time synthesis of imagined speech processes from minimally invasive recordings of neural activity. Commun. Biol. 2021, 4, 1055.
  19. Dash, D.; Ferrari, P.; Berstis, K.; Wang, J. Imagined, Intended, and Spoken Speech Envelope Synthesis from Neuromagnetic Signals. In Proceedings of the International Conference on Speech and Computer, St. Petersburg, Russia, 27–30 September 2021; pp. 134–145.
  20. Lawhern, V.J.; Solon, A.J.; Waytowich, N.R.; Gordon, S.M.; Hung, C.P.; Lance, B.J. EEGNet: A compact convolutional neural network for EEG-based brain–computer interfaces. J. Neural Eng. 2018, 15, 056013.
  21. Nguyen, C.H.; Karavas, G.K.; Artemiadis, P. Inferring imagined speech using EEG signals: A new approach using Riemannian manifold features. J. Neural Eng. 2017, 15, 016002.
  22. van den Berg, B.; van Donkelaar, S.; Alimardani, M. Inner Speech Classification using EEG Signals: A Deep Learning Approach. In Proceedings of the 2021 IEEE 2nd International Conference on Human-Machine Systems (ICHMS), Magdeburg, Germany, 8–10 September 2021; pp. 1–4.
  23. Nieto, N.; Peterson, V.; Rufiner, H.L.; Kamienkowski, J.E.; Spies, R. Thinking out loud, an open-access EEG-based BCI dataset for inner speech recognition. Sci. Data 2022, 9, 52.
  24. Cooney, C.; Korik, A.; Folli, R.; Coyle, D. Evaluation of hyperparameter optimization in machine and deep learning methods for decoding imagined speech EEG. Sensors 2020, 20, 4629.
  25. Ablin, P.; Cardoso, J.F.; Gramfort, A. Faster independent component analysis by preconditioning with Hessian approximations. IEEE Trans. Signal Process. 2018, 66, 4040–4049.
  26. Cheng, J.; Zou, Q.; Zhao, Y. ECG signal classification based on deep CNN and BiLSTM. BMC Med. Inform. Decis. Mak. 2021, 21, 365.
  27. Goodfellow, I.; Bengio, Y.; Courville, A. Deep Learning; MIT Press: Cambridge, MA, USA, 2016.
Figure 1. Experimental protocol used in [8]: a ready interval followed by a textual representation of the stimulus (vowel or word). The inner speech production took place during the stimulus interval for 4 s.
Figure 2. The figure illustrates the proposed workflow. The preprocessed EEG signals, with or without downsampling, are used to train a CNN model for inner speech decoding.
Figure 3. Proposed iSpeech-CNN architecture for imagined speech recognition, based on the architecture described in [7]. This network is trained separately for vowels and words; therefore, the difference lies in the last (softmax) layer, which has five outputs for vowels and six outputs for words.
Figure 4. The impact of different sampling rates on the vowel recognition performance of the iSpeech-CNN architecture with different numbers of filters in the first three CNN layers. The bars indicate the standard error, sample size = 5. Theoretical chance accuracy = 20% (red dotted line).
Figure 5. Subject-dependent results for vowels without downsampling on preprocessed signals (iSpeech-CNN Architecture). Theoretical chance accuracy = 20% (red dotted line).
Figure 6. Leave-one-out results for vowels with and without downsampling on preprocessed signals (iSpeech-CNN Architecture). Theoretical chance accuracy = 20% (red dotted line).
Figure 7. Subject-dependent results for words without downsampling on preprocessed signals (iSpeech-CNN Architecture). Theoretical chance accuracy = 16.66% (red dotted line).
Figure 8. The impact of different sampling rates on the word recognition performance of the iSpeech-CNN architecture with different numbers of filters in the first three CNN layers. Performance increases with higher sampling rates. The bars indicate the standard error, sample size = 5. Theoretical chance accuracy = 16.66% (red dotted line).
Table 1. Overview of inner speech studies (2015–2021). TL: transfer learning.

Study | Technology | Number of Subjects | Number of Classes | Classifier | Results | Subject-Independent
2015—[6] | EEG, facial | 6 | 2 phonemes | DBN | 90% | yes
2017—[8] | EEG | 15 | 5 vowels | RF | 22.32% | no
2017—[8] | EEG | 15 | 6 words | RF | 18.58% | no
2017—[21] | EEG | 6 | 3 words | MRVM | 50.1% ± 3.5% | no
2017—[21] | EEG | 6 | 2 words | MRVM | 66.2% ± 4.8% | no
2018—[10] | ECoG | 5 | 2 (6) words | SVM | 58% | no
2019—[7] | EEG | 15 | 5 vowels | CNN | 35.68% (with TL), 32.75% | yes
2020—[24] | EEG | 15 | 6 words | CNN | 24.90% | no
2020—[24] | EEG | 15 | 6 words | CNN | 24.46% | yes
2019—[15] | fNIRS, EEG | 11 | 2 words | RLDA | 70.45% ± 19.19% | no
2020—[12] | MEG | 8 | 5 phrases | CNN | 93% | no
2021—[22] | EEG | 8 | 4 words | CNN | 29.7% | no
Table 2. Average subject-dependent classification results on the [8] dataset.

Study | Classifier | Vowels | Words
2017—[8] | RF | 22.32% ± 1.81% | 18.58% ± 1.47%
2019, 2020—[7,24] | CNN | 32.75% ± 3.23% | 24.90% ± 0.93%
iSpeech-GRU | GRU | 19.28% ± 2.15% | 17.28% ± 1.45%
iSpeech-CNN (proposed) | CNN | 35.20% ± 3.99% | 29.21% ± 3.12%
Table 3. Precision and F-score (with respect to Table A5, Table A6, Table A7 and Table A8) for vowel and word classification (iSpeech-CNN Architecture).

Vowels (iSpeech-CNN):
Configuration | Precision | Weighted F-Score | F-Score
No Downsampling | 34.85 | 41.12 | 28.45
Downsampling | 34.62 | 38.99 | 30.02
Cooney et al. [7] (Downsampling) | 33.00 | - | 33.17

Words (iSpeech-CNN):
Configuration | Precision | Weighted F-Score | F-Score
No Downsampling | 29.04 | 36.18 | 21.84
Downsampling | 26.84 | 31.94 | 21.50
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
