Scalable Univariate and Multivariate Time-Series Classifiers with Deep Learning Methods Exploiting Symbolic Representations
Abstract
1. Introduction
- RQ1. How would a classifier built on top of graph representations of SFA words perform in terms of accuracy and execution time? Our first attempt to answer this question is through the use of graph kernels, which are more scalable than Graph Neural Networks, with an SVM classifier on top.
- RQ2. Closely linked with RQ1, we take the graph representation with SFA words and combine it with Graph Convolutional Neural Networks to see how the resulting classifier would fare against the state of the art.
- RQ3. For this research question, we aim to answer whether SFA, together with state-of-the-art deep learning methods adapted from the text classification domain, can also provide state-of-the-art accuracy and execution times for multivariate time-series classification.
- RQ4. Finally, we aim to answer whether SCALE-BOSS-MR [15], a state-of-the-art symbolic time-series classifier, can be adapted to the multivariate use case and whether it provides state-of-the-art accuracy and execution time.
1. We use a graph representation together with a symbolic representation to represent time-series as graphs. We use this representation together with graph kernels and an SVM classifier to create SCALE-BOSS-GRAPH.
2. We use the graph representation of time-series in conjunction with a Graph Convolutional Neural Network to see whether it can attain state-of-the-art accuracy and execution time.
3. We adapt state-of-the-art neural network architectures from the text classification domain and use them in conjunction with the symbolic representation to create state-of-the-art deep learning symbolic time-series classifiers. We also adapt the proposed deep learning methods to the multivariate use case and compare them against state-of-the-art deep learning methods.
4. We adapt SCALE-BOSS-MR to the multivariate use case and compare the adapted version, SCALE-BOSS-MR-MV, to state-of-the-art time-series classifiers.
2. Related Work
2.1. Methods Using Symbolic Representations
2.2. Methods Using Convolutional Kernels
2.3. Methods Using Deep Learning
2.4. Multivariate Time Series Classification
3. Graph Kernel Preliminaries
Color Refinement
4. Deep Learning Preliminaries
4.1. Convolutional Neural Networks
4.2. Recurrent Neural Networks
4.3. Long Short Term Memory
4.4. Attention
1. Alignment scores: The alignment scores e indicate how well the elements of the input sequence align with the current output. They are computed as e = tanh(Wx + b), where W is the weight matrix of the attention layer, b is the bias of the attention layer, and x is the input of the attention layer.
2. Weights: The weights a are computed by applying a softmax to e. That is, a = softmax(e).
3. Context Vector: The context vector c is computed as the dot product of x and a. That is, c = x · a = Σᵢ aᵢxᵢ.
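The three steps above can be sketched in pure Python. The scalar weight `w`, bias `b`, tanh score activation, and toy input are illustrative assumptions for a one-dimensional case; in the proposed networks, W and b are learned tensors:

```python
import math

def attention(xs, w, b):
    """Additive attention over a sequence of scalar features (sketch).

    xs   : input values, one per time step
    w, b : attention-layer weight and bias (scalars here for clarity)
    """
    # 1. Alignment scores: e_t = tanh(w * x_t + b)
    e = [math.tanh(w * x + b) for x in xs]
    # 2. Weights: softmax over the alignment scores (numerically stable)
    m = max(e)
    exp_e = [math.exp(v - m) for v in e]
    s = sum(exp_e)
    a = [v / s for v in exp_e]
    # 3. Context vector: weighted sum (dot product) of inputs and weights
    c = sum(a_t * x_t for a_t, x_t in zip(a, xs))
    return e, a, c

e, a, c = attention([0.5, -1.0, 2.0], w=1.0, b=0.0)
```

The softmax guarantees that the weights a are positive and sum to one, so the context vector is a convex combination of the inputs.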
5. Proposed Methods
5.1. SCALE-BOSS-GRAPH
1. In the first step, the training set is converted to its symbolic representation. In our instantiation we chose SFA, but the choice of symbolic representation is orthogonal to the framework.
2. Next, we create the graph-based symbolic representation of the training set.
3. In the third step, we compute the kernel for the training set. For this instantiation we chose the WL graph kernel [45] with the Vertex Histogram kernel [53] as the base kernel; the choice of kernel is orthogonal to the method. We chose WL with Vertex Histogram for scalability: WL proved to be among the fastest graph kernels while delivering very good accuracy. We experimented with other graph kernels, but most were orders of magnitude slower than WL with no gains in accuracy. In addition, as already pointed out, the WL kernel generalizes node degrees: degrees provide one-hop neighborhood information, whereas WL extends this to k-hop information.
4. Then, we convert the test set to its symbolic representation and then to the graph-based symbolic representation.
5. Next, we compute the kernel representation for the test set.
6. Finally, the SVC classifier is trained on the precomputed kernel for the training set.
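Steps 1–5 can be sketched in pure Python under stated assumptions: the graph construction (edges between consecutive SFA words), the toy word sequences, and the hand-rolled WL/Vertex Histogram computation are illustrative, not the paper's exact implementation:

```python
from collections import Counter

def series_to_graph(sfa_words):
    """Steps 1-2 (sketch): nodes are the distinct SFA words of a series;
    undirected edges link words that occur consecutively."""
    edges = {tuple(sorted(p)) for p in zip(sfa_words, sfa_words[1:])}
    return set(sfa_words), edges

def wl_relabel(nodes, edges, labels):
    """One Weisfeiler-Leman (color refinement) iteration: a node's new
    label combines its old label with its neighbours' sorted labels."""
    neigh = {n: [] for n in nodes}
    for u, v in edges:
        neigh[u].append(labels[v])
        neigh[v].append(labels[u])
    return {n: (labels[n], tuple(sorted(neigh[n]))) for n in nodes}

def wl_vertex_histogram_kernel(g1, g2, iterations=2):
    """Step 3 (sketch): WL kernel with a Vertex Histogram base kernel --
    the dot product of node-label histograms, summed over WL iterations."""
    (n1, e1), (n2, e2) = g1, g2
    l1 = {n: n for n in n1}
    l2 = {n: n for n in n2}
    k = 0
    for _ in range(iterations + 1):
        h1, h2 = Counter(l1.values()), Counter(l2.values())
        k += sum(h1[lab] * h2[lab] for lab in h1.keys() & h2.keys())
        l1 = wl_relabel(n1, e1, l1)
        l2 = wl_relabel(n2, e2, l2)
    return k

g1 = series_to_graph(["ab", "ba", "ab", "bb"])  # toy SFA word sequences
g2 = series_to_graph(["ab", "ba", "bb"])
k = wl_vertex_histogram_kernel(g1, g2)
```

The Gram matrix of such kernel values over all pairs of training series would then be fed to an SVM with a precomputed kernel (step 6), e.g. scikit-learn's `SVC(kernel="precomputed")`.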
5.2. Proposed Networks for the Univariate Case
1. First, the raw time-series is converted to the SFA representation.
2. Then, each of the resulting SFA words is converted to an integer according to the following scheme:
   (a) The most common SFA word is assigned the integer 2.
   (b) The second most common SFA word is assigned the integer 3, and so on for each next-most-frequent SFA word.
   (c) The integer 0 is used for padding, and the integer 1 is reserved for the out-of-vocabulary token.
5.3. Merging Strategies for the Multivariate Use Case
5.4. Proposed Multivariate Neural Networks
5.5. The SCALE-BOSS-MR-MV (SBMR-MV) Algorithm
Algorithm 1: SCALE-BOSS-MR-MV (SBMR-MV) algorithm.
6. Evaluation
6.1. Experimental Setting
1. The performance, in terms of accuracy and execution time, of SCALE-BOSS-GRAPH in the univariate use case.
2. The performance, in terms of accuracy and execution time, of the proposed SFA-enhanced neural network classifiers in the univariate use case.
3. The performance, in terms of accuracy and execution time, of the proposed SFA-enhanced neural network classifiers in the multivariate use case.
4. The performance, in terms of accuracy and execution time, of the adaptation of SCALE-BOSS-MR (SCALE-BOSS-MR-MV) to the multivariate use case.
6.2. SCALE-BOSS Results
6.3. State-of-the-Art Algorithm Results
6.4. SCALE-BOSS-GRAPH Results
6.5. Evaluation of the Deep Neural Network Architectures
6.6. Multivariate Evaluation
7. Conclusions
1. The SCALE-BOSS-GRAPH algorithm proved very scalable, but its accuracy is lower than the state of the art.
2. The GCNs were on par in accuracy with the state-of-the-art deep learning methods, but were less scalable than the other methods.
3. The proposed deep learning methods inspired by the text classification domain proved very effective in both accuracy and execution time when compared with state-of-the-art deep learning methods, in both the univariate and multivariate use cases. The optimizations we employed were instrumental to this result: batch normalization gave a 5% improvement in accuracy over the baseline, and adding an attention layer gave a 10% increase in accuracy, albeit with a non-negligible cost in execution time. Adding convolutional layers to the architecture contributed little to accuracy but proved very beneficial in terms of execution time.
4. The adaptation of SCALE-BOSS-MR to the multivariate use case, namely SCALE-BOSS-MR-MV, proved very scalable and accurate compared to the state of the art. More specifically, SCALE-BOSS-MR-MV proved more scalable than ROCKET while being marginally less accurate.
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
Appendix A
| Dataset | Crop | FordA | FordB | HandOutlines | NonInvasiveFetalECGThorax1 | NonInvasiveFetalECGThorax2 | PhalangesOutlinesCorrect | TwoPatterns |
|---|---|---|---|---|---|---|---|---|
| Algorithm | ||||||||
| SVC-2_1.0 | 0.605 | 0.813 | 0.598 | 0.700 | 0.534 | 0.699 | 0.704 | 0.786 |
| SVC-2_3.5 | 0.612 | 0.811 | 0.620 | 0.695 | 0.588 | 0.732 | 0.726 | 0.794 |
| SVCcv-2_2.0_3.0_3.5 | 0.612 | 0.811 | 0.617 | 0.711 | 0.588 | 0.735 | 0.723 | 0.793 |
| Dataset | Crop | FordA | FordB | HandOutlines | NonInvasiveFetalECGThorax1 | NonInvasiveFetalECGThorax2 | PhalangesOutlinesCorrect | TwoPatterns |
|---|---|---|---|---|---|---|---|---|
| Algorithm | ||||||||
| SVC-2_1.0 | 29.307 | 26.350 | 22.957 | 29.297 | 26.089 | 24.783 | 2.726 | 7.243 |
| SVC-2_3.5 | 29.857 | 26.383 | 23.948 | 27.087 | 24.552 | 24.667 | 2.413 | 6.966 |
| SVCcv-2_2.0_3.0_3.5 | 46.939 | 29.536 | 27.208 | 28.834 | 27.345 | 26.861 | 2.990 | 7.879 |
| Dataset | Crop | FordA | FordB | HandOutlines | NonInvasiveFetalECGThorax1 | NonInvasiveFetalECGThorax2 | PhalangesOutlinesCorrect | TwoPatterns |
|---|---|---|---|---|---|---|---|---|
| Algorithm | ||||||||
| CNNClassifier-ep20 | 0.049 | 0.510 | 0.509 | 0.859 | 0.018 | 0.018 | 0.613 | 0.344 |
| CNNClassifier-ep200 | 0.599 | 0.905 | 0.759 | 0.884 | 0.790 | 0.856 | 0.650 | 0.909 |
| EncoderClassifier-ep20 | 0.664 | 0.942 | 0.801 | 0.900 | 0.831 | 0.848 | 0.657 | 0.847 |
| FCNClassifier-ep20 | 0.505 | 0.896 | 0.752 | 0.641 | 0.030 | 0.042 | 0.667 | 0.838 |
| IInception-ep20 | 0.696 | 0.927 | 0.704 | 0.359 | 0.257 | 0.500 | 0.696 | 1.000 |
| IndividualLITEClassifier-ep100 | 0.720 | 0.948 | 0.809 | 0.781 | 0.589 | 0.883 | 0.828 | 1.000 |
| IndividualLITEClassifier-ep20 | 0.649 | 0.906 | 0.833 | 0.665 | 0.304 | 0.250 | 0.775 | 0.690 |
| IndividualLITEClassifier-ep50 | 0.706 | 0.937 | 0.775 | 0.822 | 0.537 | 0.683 | 0.700 | 1.000 |
| MLPClassifier-ep20 | 0.067 | 0.568 | 0.506 | 0.722 | 0.032 | 0.046 | 0.613 | 0.270 |
| MLPClassifier-ep200 | 0.270 | 0.698 | 0.598 | 0.854 | 0.205 | 0.219 | 0.614 | 0.484 |
| ResNet-ep20 | 0.652 | 0.928 | 0.716 | 0.668 | 0.371 | 0.700 | 0.611 | 0.999 |
| Dataset | Crop | FordA | FordB | HandOutlines | NonInvasiveFetalECGThorax1 | NonInvasiveFetalECGThorax2 | PhalangesOutlinesCorrect | TwoPatterns |
|---|---|---|---|---|---|---|---|---|
| Algorithm | ||||||||
| CNNClassifier-ep20 | 9.292 | 8.740 | 8.856 | 8.097 | 6.642 | 6.664 | 2.929 | 2.835 |
| CNNClassifier-ep200 | 62.032 | 81.318 | 80.168 | 88.313 | 66.284 | 61.911 | 20.709 | 18.079 |
| EncoderClassifier-ep20 | 543.388 | 1464.732 | 1477.982 | 2208.398 | 1173.432 | 1165.267 | 186.202 | 147.210 |
| FCNClassifier-ep20 | 107.983 | 547.794 | 546.054 | 908.470 | 473.204 | 410.372 | 48.388 | 45.022 |
| IInception-ep20 | 405.621 | 1622.802 | 1674.065 | 2536.508 | 1170.444 | 1210.527 | 177.632 | 144.673 |
| IndividualLITEClassifier-ep100 | 277.578 | 836.207 | 888.090 | 1347.996 | 641.307 | 631.699 | 91.441 | 80.029 |
| IndividualLITEClassifier-ep20 | 64.009 | 186.180 | 187.645 | 305.871 | 145.600 | 146.959 | 23.528 | 21.885 |
| IndividualLITEClassifier-ep50 | 148.385 | 482.603 | 495.227 | 725.426 | 338.380 | 348.790 | 49.652 | 42.970 |
| MLPClassifier-ep20 | 15.172 | 9.065 | 8.830 | 7.562 | 6.117 | 6.087 | 4.518 | 3.489 |
| MLPClassifier-ep200 | 139.296 | 83.352 | 83.041 | 64.771 | 51.888 | 50.578 | 36.980 | 23.391 |
| ResNet-ep20 | 316.447 | 1341.227 | 1398.291 | 1800.079 | 979.298 | 953.220 | 146.271 | 116.175 |
| Dataset | Crop | FordA | FordB | HandOutlines | NonInvasiveFetalECGThorax1 | NonInvasiveFetalECGThorax2 | PhalangesOutlinesCorrect | TwoPatterns |
|---|---|---|---|---|---|---|---|---|
| Algorithm | ||||||||
| LSTM-CNN-ep20 | 0.602 | 0.836 | 0.720 | 0.603 | 0.544 | 0.688 | 0.742 | 0.940 |
| LSTM-CNN-ep50 | 0.614 | 0.880 | 0.709 | 0.643 | 0.523 | 0.700 | 0.726 | 0.950 |
| LSTM-CNN-no-bn-ep20 | 0.612 | 0.900 | 0.683 | 0.646 | 0.343 | 0.450 | 0.721 | 0.969 |
| LSTM-CNN-no-bn-ep50 | 0.625 | 0.855 | 0.662 | 0.659 | 0.377 | 0.538 | 0.755 | 0.967 |
| LSTM-attention-cnn-ep20 | 0.614 | 0.908 | 0.731 | 0.854 | 0.720 | 0.819 | 0.753 | 0.935 |
| LSTM-attention-cnn-ep50 | 0.615 | 0.902 | 0.735 | 0.881 | 0.751 | 0.845 | 0.719 | 0.935 |
| LSTM-attention-cnn2-ep20 | 0.594 | 0.911 | 0.742 | 0.889 | 0.722 | 0.820 | 0.758 | 0.929 |
| LSTM-attention-cnn2-ep50 | 0.599 | 0.907 | 0.744 | 0.903 | 0.785 | 0.838 | 0.732 | 0.933 |
| LSTM-attention-ep20 | 0.614 | 0.903 | 0.740 | 0.881 | 0.692 | 0.777 | 0.718 | 0.941 |
| LSTM-attention-ep50 | 0.614 | 0.909 | 0.737 | 0.878 | 0.768 | 0.832 | 0.712 | 0.930 |
| gcn-ep20 | 0.525 | 0.912 | 0.730 | 0.676 | 0.582 | 0.606 | 0.642 | 0.772 |
| Dataset | Crop | FordA | FordB | HandOutlines | NonInvasiveFetalECGThorax1 | NonInvasiveFetalECGThorax2 | PhalangesOutlinesCorrect | TwoPatterns |
|---|---|---|---|---|---|---|---|---|
| Algorithm | ||||||||
| LSTM-CNN-ep20 | 29.245 | 106.697 | 145.504 | 321.038 | 158.140 | 156.379 | 12.654 | 21.750 |
| LSTM-CNN-ep50 | 33.414 | 130.198 | 139.316 | 277.145 | 236.521 | 260.698 | 14.464 | 19.466 |
| LSTM-CNN-no-bn-ep20 | 25.209 | 148.240 | 157.016 | 254.593 | 130.238 | 131.012 | 14.881 | 18.473 |
| LSTM-CNN-no-bn-ep50 | 45.386 | 200.621 | 108.940 | 463.126 | 292.452 | 295.635 | 20.117 | 25.116 |
| LSTM-attention-cnn-ep20 | 39.236 | 177.046 | 204.562 | 479.402 | 231.187 | 227.560 | 20.807 | 25.905 |
| LSTM-attention-cnn-ep50 | 46.482 | 204.554 | 164.150 | 1176.196 | 339.760 | 447.863 | 18.086 | 23.698 |
| LSTM-attention-cnn2-ep20 | 26.532 | 89.626 | 101.880 | 246.840 | 127.765 | 130.393 | 13.175 | 17.661 |
| LSTM-attention-cnn2-ep50 | 28.055 | 104.636 | 102.962 | 456.134 | 250.488 | 280.236 | 14.411 | 19.087 |
| LSTM-attention-ep20 | 67.502 | 554.118 | 364.036 | 1097.373 | 490.547 | 484.485 | 35.522 | 42.412 |
| LSTM-attention-ep50 | 100.004 | 409.937 | 497.081 | 1666.853 | 1145.013 | 970.118 | 38.963 | 55.774 |
| gcn-ep20 | 126.940 | 434.270 | 410.670 | 2714.180 | 437.547 | 425.622 | 24.379 | 38.267 |
| Dataset | ArticularyWordRecognition | EthanolConcentration | FingerMovements | SelfRegulationSCP1 | UWaveGestureLibrary |
|---|---|---|---|---|---|
| Algorithm | |||||
| SBMR-MV-RF-W0-noTrend-D0-UG-BAG | 0.807 | 0.350 | 0.500 | 0.727 | 0.472 |
| SBMR-MV-RF-W0-noTrend-D0-UG-STACK | 0.953 | 0.399 | 0.500 | 0.734 | 0.666 |
| SBMR-MV-RF-W0-trend-D0-UG-BAG | 0.873 | 0.357 | 0.620 | 0.737 | 0.569 |
| SBMR-MV-RF-W0-trend-D0-UG-STACK | 0.963 | 0.285 | 0.570 | 0.765 | 0.700 |
| SBMR-MV-RF-W0-trend-D2-BG-BAG | 0.833 | 0.357 | 0.510 | 0.792 | 0.744 |
| SBMR-MV-RF-W0-trend-D2-BG-STACK | 0.957 | 0.331 | 0.480 | 0.819 | 0.831 |
| SBMR-MV-RF-W0-trend-D4-BG-BAG | 0.870 | 0.327 | 0.550 | 0.751 | 0.706 |
| SBMR-MV-RF-W0-trend-D4-BG-STACK | 0.963 | 0.354 | 0.550 | 0.751 | 0.838 |
| SBMR-MV-RF-W0-trend-D6-UG-BAG | 0.870 | 0.331 | 0.500 | 0.758 | 0.725 |
| SBMR-MV-RF-W0-trend-D6-UG-STACK | 0.950 | 0.331 | 0.440 | 0.788 | 0.828 |
| SBMR-MV-ridge-cv-W0-noTrend-D0-UG-BAG | 0.823 | 0.274 | 0.470 | 0.713 | 0.463 |
| SBMR-MV-ridge-cv-W0-noTrend-D0-UG-STACK | 0.990 | 0.316 | 0.480 | 0.706 | 0.653 |
| SBMR-MV-ridge-cv-W0-trend-D0-UG-BAG | 0.873 | 0.308 | 0.490 | 0.713 | 0.497 |
| SBMR-MV-ridge-cv-W0-trend-D0-UG-STACK | 0.983 | 0.323 | 0.480 | 0.734 | 0.697 |
| SBMR-MV-ridge-cv-W0-trend-D2-BG-BAG | 0.983 | 0.312 | 0.520 | 0.826 | 0.769 |
| SBMR-MV-ridge-cv-W0-trend-D2-BG-STACK | 0.987 | 0.274 | 0.500 | 0.850 | 0.891 |
| SBMR-MV-ridge-cv-W0-trend-D4-BG-BAG | 0.963 | 0.308 | 0.500 | 0.737 | 0.722 |
| SBMR-MV-ridge-cv-W0-trend-D4-BG-STACK | 0.993 | 0.312 | 0.500 | 0.761 | 0.856 |
| SBMR-MV-ridge-cv-W0-trend-D6-UG-BAG | 0.937 | 0.281 | 0.500 | 0.761 | 0.753 |
| Dataset | ArticularyWordRecognition | EthanolConcentration | FingerMovements | SelfRegulationSCP1 | UWaveGestureLibrary |
|---|---|---|---|---|---|
| Algorithm | |||||
| SBMR-MV-RF-W0-noTrend-D0-UG-BAG | 3.105 | 14.333 | 2.333 | 12.768 | 1.810 |
| SBMR-MV-RF-W0-noTrend-D0-UG-STACK | 3.242 | 14.439 | 2.543 | 12.773 | 1.939 |
| SBMR-MV-RF-W0-trend-D0-UG-BAG | 5.992 | 26.609 | 4.489 | 25.454 | 3.495 |
| SBMR-MV-RF-W0-trend-D0-UG-STACK | 6.176 | 27.464 | 4.869 | 25.030 | 3.543 |
| SBMR-MV-RF-W0-trend-D2-BG-BAG | 27.984 | 116.769 | 21.084 | 119.072 | 16.017 |
| SBMR-MV-RF-W0-trend-D2-BG-STACK | 29.462 | 113.139 | 25.120 | 113.507 | 16.038 |
| SBMR-MV-RF-W0-trend-D4-BG-BAG | 13.639 | 58.592 | 10.274 | 57.004 | 7.791 |
| SBMR-MV-RF-W0-trend-D4-BG-STACK | 14.583 | 57.821 | 12.483 | 56.129 | 7.905 |
| SBMR-MV-RF-W0-trend-D6-UG-BAG | 12.494 | 54.386 | 9.202 | 52.044 | 7.275 |
| SBMR-MV-RF-W0-trend-D6-UG-STACK | 12.421 | 51.986 | 9.732 | 50.525 | 7.008 |
| SBMR-MV-ridge-cv-W0-noTrend-D0-UG-BAG | 3.029 | 14.389 | 2.269 | 12.406 | 1.756 |
| SBMR-MV-ridge-cv-W0-noTrend-D0-UG-STACK | 3.084 | 14.438 | 2.410 | 12.546 | 1.756 |
| SBMR-MV-ridge-cv-W0-trend-D0-UG-BAG | 5.856 | 26.680 | 4.375 | 24.617 | 3.384 |
| SBMR-MV-ridge-cv-W0-trend-D0-UG-STACK | 6.158 | 26.524 | 4.742 | 25.588 | 3.454 |
| SBMR-MV-ridge-cv-W0-trend-D2-BG-BAG | 27.256 | 111.711 | 20.511 | 113.173 | 15.684 |
| SBMR-MV-ridge-cv-W0-trend-D2-BG-STACK | 29.062 | 112.905 | 24.957 | 112.336 | 15.853 |
| SBMR-MV-ridge-cv-W0-trend-D4-BG-BAG | 13.406 | 57.022 | 10.161 | 55.374 | 7.650 |
| SBMR-MV-ridge-cv-W0-trend-D4-BG-STACK | 14.341 | 56.462 | 12.269 | 56.234 | 7.807 |
| SBMR-MV-ridge-cv-W0-trend-D6-UG-BAG | 11.783 | 51.678 | 8.750 | 50.138 | 6.808 |
| Dataset | ArticularyWordRecognition | EthanolConcentration | FingerMovements | SelfRegulationSCP1 | UWaveGestureLibrary |
|---|---|---|---|---|---|
| Algorithm | |||||
| SBMR-MV-RF-W11-trend-D6-BG-BAG | 0.897 | 0.350 | 0.570 | 0.788 | 0.831 |
| SBMR-MV-RF-W11-trend-D6-BG-STACK | 0.973 | 0.338 | 0.410 | 0.751 | 0.878 |
| SBMR-MV-RF-W15-trend-D6-BG-BAG | 0.793 | 0.327 | 0.550 | 0.771 | 0.812 |
| SBMR-MV-RF-W15-trend-D6-BG-STACK | 0.953 | 0.361 | 0.560 | 0.765 | 0.853 |
| SBMR-MV-ridge-cv-W11-trend-D4-BG-BAG-chi | 0.997 | 0.350 | 0.550 | 0.788 | 0.859 |
| SBMR-MV-ridge-cv-W11-trend-D4-BG-STACK-chi | 0.997 | 0.335 | 0.520 | 0.819 | 0.916 |
| SBMR-MV-ridge-cv-W11-trend-D6-BG-BAG | 0.993 | 0.316 | 0.510 | 0.857 | 0.863 |
| SBMR-MV-ridge-cv-W11-trend-D6-BG-BAG-chi | 0.993 | 0.342 | 0.570 | 0.823 | 0.863 |
| SBMR-MV-ridge-cv-W11-trend-D6-BG-STACK | 0.997 | 0.285 | 0.530 | 0.857 | 0.916 |
| SBMR-MV-ridge-cv-W11-trend-D6-BG-STACK-chi | 0.997 | 0.323 | 0.500 | 0.874 | 0.916 |
| SBMR-MV-ridge-cv-W14-trend-D6-BG-STACK | 0.993 | 0.297 | 0.490 | 0.833 | 0.903 |
| SBMR-MV-ridge-cv-W15-trend-D2-BG-BAG-chi | 0.987 | 0.373 | 0.610 | 0.846 | 0.850 |
| SBMR-MV-ridge-cv-W15-trend-D2-BG-STACK-chi | 0.990 | 0.342 | 0.560 | 0.850 | 0.909 |
| SBMR-MV-ridge-cv-W15-trend-D4-BG-BAG-chi | 0.993 | 0.304 | 0.540 | 0.775 | 0.884 |
| SBMR-MV-ridge-cv-W15-trend-D4-BG-STACK-chi | 0.993 | 0.327 | 0.520 | 0.826 | 0.906 |
| SBMR-MV-ridge-cv-W15-trend-D6-BG-BAG | 0.990 | 0.308 | 0.530 | 0.860 | 0.878 |
| SBMR-MV-ridge-cv-W15-trend-D6-BG-BAG-chi | 0.990 | 0.312 | 0.590 | 0.857 | 0.878 |
| SBMR-MV-ridge-cv-W15-trend-D6-BG-STACK | 0.990 | 0.281 | 0.520 | 0.843 | 0.916 |
| SBMR-MV-ridge-cv-W15-trend-D6-BG-STACK-chi | 0.990 | 0.300 | 0.590 | 0.850 | 0.916 |
| SBMR-MV-ridge-cv-W8-trend-D6-BG-BAG | 0.997 | 0.285 | 0.520 | 0.846 | 0.828 |
| SBMR-MV-ridge-cv-W8-trend-D6-BG-STACK | 0.997 | 0.289 | 0.530 | 0.853 | 0.894 |
| Dataset | ArticularyWordRecognition | EthanolConcentration | FingerMovements | SelfRegulationSCP1 | UWaveGestureLibrary |
|---|---|---|---|---|---|
| Algorithm | |||||
| SBMR-MV-RF-W11-trend-D6-BG-BAG | 31.708 | 110.903 | 25.473 | 112.669 | 16.795 |
| SBMR-MV-RF-W11-trend-D6-BG-STACK | 41.358 | 115.328 | 42.269 | 119.476 | 18.377 |
| SBMR-MV-RF-W15-trend-D6-BG-BAG | 18.695 | 58.289 | 16.674 | 58.350 | 9.610 |
| SBMR-MV-RF-W15-trend-D6-BG-STACK | 25.418 | 61.010 | 28.893 | 61.293 | 10.652 |
| SBMR-MV-ridge-cv-W11-trend-D4-BG-BAG-chi | 31.945 | 111.858 | 23.221 | 114.698 | 17.014 |
| SBMR-MV-ridge-cv-W11-trend-D4-BG-STACK-chi | 40.846 | 112.995 | 32.580 | 117.638 | 18.243 |
| SBMR-MV-ridge-cv-W11-trend-D6-BG-BAG | 38.256 | 145.062 | 31.322 | 143.597 | 20.178 |
| SBMR-MV-ridge-cv-W11-trend-D6-BG-BAG-chi | 32.462 | 112.871 | 23.014 | 112.695 | 17.219 |
| SBMR-MV-ridge-cv-W11-trend-D6-BG-STACK | 40.630 | 115.386 | 44.945 | 120.181 | 18.654 |
| SBMR-MV-ridge-cv-W11-trend-D6-BG-STACK-chi | 41.561 | 113.599 | 32.276 | 115.983 | 18.670 |
| SBMR-MV-ridge-cv-W14-trend-D6-BG-STACK | 49.472 | 160.643 | 47.373 | 167.632 | 24.661 |
| SBMR-MV-ridge-cv-W15-trend-D2-BG-BAG-chi | 38.014 | 112.522 | 27.867 | 113.787 | 19.632 |
| SBMR-MV-ridge-cv-W15-trend-D2-BG-STACK-chi | 55.559 | 116.257 | 42.339 | 120.743 | 22.030 |
| SBMR-MV-ridge-cv-W15-trend-D4-BG-BAG-chi | 18.775 | 58.028 | 14.462 | 57.993 | 9.299 |
| SBMR-MV-ridge-cv-W15-trend-D4-BG-STACK-chi | 25.860 | 58.994 | 21.088 | 59.509 | 10.347 |
| SBMR-MV-ridge-cv-W15-trend-D6-BG-BAG | 17.784 | 57.440 | 16.034 | 56.868 | 9.303 |
| SBMR-MV-ridge-cv-W15-trend-D6-BG-BAG-chi | 18.816 | 58.599 | 13.885 | 57.084 | 9.833 |
| SBMR-MV-ridge-cv-W15-trend-D6-BG-STACK | 24.287 | 58.777 | 27.667 | 60.622 | 10.238 |
| SBMR-MV-ridge-cv-W15-trend-D6-BG-STACK-chi | 28.181 | 63.833 | 22.487 | 65.002 | 11.692 |
| SBMR-MV-ridge-cv-W8-trend-D6-BG-BAG | 141.573 | 543.507 | 105.930 | 566.511 | 79.861 |
| SBMR-MV-ridge-cv-W8-trend-D6-BG-STACK | 157.743 | 546.896 | 145.858 | 560.055 | 79.709 |
| Dataset | ArticularyWordRecognition | EthanolConcentration | FingerMovements | SelfRegulationSCP1 | UWaveGestureLibrary |
|---|---|---|---|---|---|
| Algorithm | |||||
| LSTM-attention-cnn2-ep20-bag | 0.470 | 0.255 | 0.530 | 0.853 | 0.394 |
| LSTM-attention-cnn2-ep20-stack | 0.927 | 0.266 | 0.500 | 0.498 | 0.653 |
| LSTM-attention-cnn2-ep50-bag | 0.647 | 0.266 | 0.600 | 0.860 | 0.531 |
| LSTM-attention-cnn2-ep50-stack | 0.907 | 0.346 | 0.530 | 0.853 | 0.756 |
| LSTM-CNN-ep20-stack | 0.913 | 0.247 | 0.510 | 0.638 | 0.562 |
| LSTM-CNN-ep50-stack | 0.887 | 0.327 | 0.480 | 0.686 | 0.625 |
| Dataset | ArticularyWordRecognition | EthanolConcentration | FingerMovements | SelfRegulationSCP1 | UWaveGestureLibrary |
|---|---|---|---|---|---|
| Algorithm | |||||
| LSTM-attention-cnn2-ep20-bag | 30.993 | 85.282 | 17.878 | 96.897 | 17.104 |
| LSTM-attention-cnn2-ep20-stack | 12.719 | 48.457 | 11.653 | 32.342 | 10.190 |
| LSTM-attention-cnn2-ep50-bag | 45.070 | 71.921 | 19.318 | 73.720 | 26.229 |
| LSTM-attention-cnn2-ep50-stack | 22.647 | 67.122 | 14.955 | 38.722 | 16.491 |
| LSTM-CNN-ep20-stack | 10.642 | 32.644 | 9.252 | 19.476 | 8.247 |
| LSTM-CNN-ep50-stack | 13.897 | 119.332 | 12.221 | 45.039 | 13.111 |
| Dataset | ArticularyWordRecognition | EthanolConcentration | FingerMovements | SelfRegulationSCP1 | UWaveGestureLibrary |
|---|---|---|---|---|---|
| Algorithm | |||||
| CNNClassifier-ep20 | 0.083 | 0.255 | 0.460 | 0.846 | 0.191 |
| CNNClassifier-ep200 | 0.457 | 0.297 | 0.530 | 0.877 | 0.806 |
| EncoderClassifier-ep20 | 0.847 | 0.232 | 0.540 | 0.812 | 0.734 |
| IInception-ep20 | 0.920 | 0.255 | 0.480 | 0.778 | 0.456 |
| IndividualLITEClassifier-ep100 | 0.987 | 0.281 | 0.550 | 0.696 | 0.228 |
| IndividualLITEClassifier-ep50 | 0.930 | 0.308 | 0.510 | 0.884 | 0.244 |
| ResNet-ep20 | 0.163 | 0.247 | 0.460 | 0.502 | 0.125 |
| Dataset | ArticularyWordRecognition | EthanolConcentration | FingerMovements | SelfRegulationSCP1 | UWaveGestureLibrary |
|---|---|---|---|---|---|
| Algorithm | |||||
| CNNClassifier-ep20 | 1.969 | 3.426 | 1.835 | 2.652 | 1.631 |
| CNNClassifier-ep200 | 9.026 | 20.174 | 6.734 | 14.693 | 6.750 |
| EncoderClassifier-ep20 | 41.563 | 334.121 | 27.218 | 199.412 | 37.719 |
| IInception-ep20 | 52.853 | 414.763 | 32.317 | 222.469 | 46.094 |
| IndividualLITEClassifier-ep100 | 38.759 | 257.808 | 28.145 | 144.833 | 31.622 |
| IndividualLITEClassifier-ep50 | 20.604 | 138.162 | 16.901 | 74.098 | 17.632 |
| ResNet-ep20 | 33.791 | 283.649 | 19.412 | 147.360 | 31.053 |
| Dataset | ArticularyWordRecognition | EthanolConcentration | FingerMovements | SelfRegulationSCP1 | UWaveGestureLibrary |
|---|---|---|---|---|---|
| Algorithm | |||||
| ROCKET | 0.993 | 0.426 | 0.530 | 0.850 | 0.934 |
| miniRocket | 0.990 | 0.475 | 0.500 | 0.918 | 0.941 |
| Dataset | ArticularyWordRecognition | EthanolConcentration | FingerMovements | SelfRegulationSCP1 | UWaveGestureLibrary |
|---|---|---|---|---|---|
| Algorithm | |||||
| ROCKET | 40.319 | 284.413 | 10.389 | 215.686 | 43.534 |
| miniRocket | 3.452 | 16.734 | 1.318 | 12.168 | 3.523 |
References
- Chaovalitwongse, W.A.; Prokopyev, O.A.; Pardalos, P.M. Electroencephalogram (EEG) time series classification: Applications in epilepsy. Ann. Oper. Res. 2006, 148, 227–250.
- Arul, M.; Kareem, A. Applications of shapelet transform to time series classification of earthquake, wind and wave data. Eng. Struct. 2021, 228, 111564.
- Potamitis, I. Classifying insects on the fly. Ecol. Inform. 2014, 21, 40–49.
- Susto, G.A.; Cenedese, A.; Terzi, M. Chapter 9—Time-Series Classification Methods: Review and Applications to Power Systems Data. In Big Data Application in Power Systems; Arghandeh, R., Zhou, Y., Eds.; Elsevier: Amsterdam, The Netherlands, 2018; pp. 179–220.
- Ismail Fawaz, H.; Forestier, G.; Weber, J.; Idoumghar, L.; Muller, P.A. Evaluating surgical skills from kinematic data using convolutional neural networks. In Proceedings of the Medical Image Computing and Computer Assisted Intervention–MICCAI 2018: 21st International Conference, Granada, Spain, 16–20 September 2018; Proceedings, Part IV 11. Springer: Berlin/Heidelberg, Germany, 2018; pp. 214–221.
- Tao, L.; Elhamifar, E.; Khudanpur, S.; Hager, G.D.; Vidal, R. Sparse hidden Markov models for surgical gesture classification and skill evaluation. In Proceedings of the Information Processing in Computer-Assisted Interventions: Third International Conference, IPCAI 2012, Pisa, Italy, 27 June 2012; Proceedings 3. Springer: Berlin/Heidelberg, Germany, 2012; pp. 167–177.
- Forestier, G.; Lalys, F.; Riffaud, L.; Trelhu, B.; Jannin, P. Classification of surgical processes using dynamic time warping. J. Biomed. Inform. 2012, 45, 255–264.
- Devanne, M.; Wannous, H.; Berretti, S.; Pala, P.; Daoudi, M.; Del Bimbo, A. 3-D human action recognition by shape analysis of motion trajectories on Riemannian manifold. IEEE Trans. Cybern. 2014, 45, 1340–1352.
- Ji, S.; Xu, W.; Yang, M.; Yu, K. 3D convolutional neural networks for human action recognition. IEEE Trans. Pattern Anal. Mach. Intell. 2012, 35, 221–231.
- Pinto, J.P.; Pimenta, A.; Novais, P. Deep learning and multivariate time series for cheat detection in video games. Mach. Learn. 2021, 110, 3037–3057.
- Younis, R.; Zerr, S.; Ahmadi, Z. Multivariate time series analysis: An interpretable CNN-based model. In Proceedings of the 2022 IEEE 9th International Conference on Data Science and Advanced Analytics (DSAA), Shenzhen, China, 13–16 October 2022; IEEE: Piscataway Township, NJ, USA, 2022; pp. 1–10.
- Madrid, F.; Singh, S.; Chesnais, Q.; Mauck, K.; Keogh, E. Matrix profile XVI: Efficient and effective labeling of massive time series archives. In Proceedings of the 2019 IEEE International Conference on Data Science and Advanced Analytics (DSAA), Washington, DC, USA, 5–8 October 2019; IEEE: Piscataway Township, NJ, USA, 2019; pp. 463–472.
- Devanne, M.; Rémy-Néris, O.; Le Gals-Garnett, B.; Kermarrec, G.; Thepaut, A. A co-design approach for a rehabilitation robot coach for physical rehabilitation based on the error classification of motion errors. In Proceedings of the 2018 Second IEEE International Conference on Robotic Computing (IRC), Laguna Hills, CA, USA, 31 January–2 February 2018; IEEE: Piscataway Township, NJ, USA, 2018; pp. 352–357.
- Schäfer, P.; Högqvist, M. SFA: A symbolic Fourier approximation and index for similarity search in high dimensional datasets. In Proceedings of the 15th International Conference on Extending Database Technology, Berlin, Germany, 27–30 March 2012; ACM: New York, NY, USA, 2012; pp. 516–527.
- Glenis, A.; Vouros, G.A. SCALE-BOSS-MR: Scalable Time Series Classification Using Multiple Symbolic Representations. Appl. Sci. 2024, 14, 689.
- Lin, J.; Keogh, E.; Wei, L.; Lonardi, S. Experiencing SAX: A novel symbolic representation of time series. Data Min. Knowl. Discov. 2007, 15, 107–144.
- Senin, P.; Malinchik, S. SAX-VSM: Interpretable time series classification using SAX and vector space model. In Proceedings of the 2013 IEEE 13th International Conference on Data Mining, Dallas, TX, USA, 7–10 December 2013; IEEE: Piscataway Township, NJ, USA, 2013; pp. 1175–1180.
- Schäfer, P. The BOSS is concerned with time series classification in the presence of noise. Data Min. Knowl. Discov. 2015, 29, 1505–1530.
- Schäfer, P. Scalable time series classification. Data Min. Knowl. Discov. 2016, 30, 1273–1298.
- Schäfer, P.; Leser, U. Fast and accurate time series classification with WEASEL. In Proceedings of the 2017 ACM on Conference on Information and Knowledge Management, Singapore, 6–10 November 2017; ACM: New York, NY, USA, 2017; pp. 637–646.
- Glenis, A.; Vouros, G.A. Balancing between scalability and accuracy in time-series classification for stream and batch settings. In Proceedings of the Discovery Science: 23rd International Conference, DS 2020, Thessaloniki, Greece, 19–21 October 2020; Proceedings 23. Springer: Berlin/Heidelberg, Germany, 2020; pp. 265–279.
- Nguyen, T.L.; Ifrim, G. MrSQM: Fast time series classification with symbolic representations. arXiv 2021, arXiv:2109.01036.
- Nguyen, T.L.; Ifrim, G. Fast time series classification with random symbolic subsequences. In Proceedings of the International Workshop on Advanced Analytics and Learning on Temporal Data, Grenoble, France, 19–23 September 2022; Springer: Berlin/Heidelberg, Germany, 2022; pp. 50–65.
- Glenis, A.; Vouros, G.A. SCALE-BOSS: A framework for scalable time-series classification using symbolic representations. In Proceedings of the 12th Hellenic Conference on Artificial Intelligence, Corfu, Greece, 7–9 September 2022; pp. 1–9.
- Schäfer, P.; Leser, U. WEASEL 2.0—A Random Dilated Dictionary Transform for Fast, Accurate and Memory Constrained Time Series Classification. arXiv 2023, arXiv:2301.10194.
- Schäfer, P.; Leser, U. Multivariate time series classification with WEASEL+MUSE. arXiv 2017, arXiv:1711.11343.
- Dempster, A.; Petitjean, F.; Webb, G.I. ROCKET: Exceptionally fast and accurate time series classification using random convolutional kernels. Data Min. Knowl. Discov. 2020, 34, 1454–1495.
- Dempster, A.; Schmidt, D.F.; Webb, G.I. MiniRocket: A very fast (almost) deterministic transform for time series classification. In Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery & Data Mining, Singapore, 14–18 August 2021; pp. 248–257.
- Wang, Z.; Yan, W.; Oates, T. Time series classification from scratch with deep neural networks: A strong baseline. In Proceedings of the 2017 International Joint Conference on Neural Networks (IJCNN), Anchorage, AK, USA, 14–19 May 2017; IEEE: Piscataway Township, NJ, USA, 2017; pp. 1578–1585.
- Srivastava, N.; Hinton, G.; Krizhevsky, A.; Sutskever, I.; Salakhutdinov, R. Dropout: A simple way to prevent neural networks from overfitting. J. Mach. Learn. Res. 2014, 15, 1929–1958.
- Nair, V.; Hinton, G.E. Rectified linear units improve restricted Boltzmann machines. In Proceedings of the 27th International Conference on Machine Learning (ICML-10), Haifa, Israel, 21–24 June 2010; pp. 807–814.
- Ioffe, S. Batch normalization: Accelerating deep network training by reducing internal covariate shift. arXiv 2015, arXiv:1502.03167.
- He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778.
- Monti, F.; Frasca, F.; Eynard, D.; Mannion, D.; Bronstein, M.M. Fake news detection on social media using geometric deep learning. arXiv 2019, arXiv:1902.06673.
- Zhao, B.; Lu, H.; Chen, S.; Liu, J.; Wu, D. Convolutional neural networks for time series classification. J. Syst. Eng. Electron. 2017, 28, 162–169.
- Serra, J.; Pascual, S.; Karatzoglou, A. Towards a universal neural network encoder for time series. In Artificial Intelligence Research and Development; IOS Press: Amsterdam, The Netherlands, 2018; pp. 120–129.
- Ismail Fawaz, H.; Lucas, B.; Forestier, G.; Pelletier, C.; Schmidt, D.F.; Weber, J.; Webb, G.I.; Idoumghar, L.; Muller, P.A.; Petitjean, F. InceptionTime: Finding AlexNet for time series classification. Data Min. Knowl. Discov. 2020, 34, 1936–1962.
- Ismail-Fawaz, A.; Devanne, M.; Weber, J.; Forestier, G. Deep learning for time series classification using new hand-crafted convolution filters. In Proceedings of the 2022 IEEE International Conference on Big Data (Big Data), Osaka, Japan, 17–20 December 2022; IEEE: Piscataway Township, NJ, USA, 2022; pp. 972–981.
- Ismail-Fawaz, A.; Devanne, M.; Berretti, S.; Weber, J.; Forestier, G. LITE: Light inception with boosting techniques for time series classification. In Proceedings of the 2023 IEEE 10th International Conference on Data Science and Advanced Analytics (DSAA), Thessaloniki, Greece, 9–13 October 2023; IEEE: Piscataway Township, NJ, USA, 2023; pp. 1–10.
- Cui, Z.; Chen, W.; Chen, Y. Multi-scale convolutional neural networks for time series classification. arXiv 2016, arXiv:1603.06995.
- Wickstrøm, K.; Kampffmeyer, M.; Mikalsen, K.Ø.; Jenssen, R. Mixing up contrastive learning: Self-supervised representation learning for time series. Pattern Recognit. Lett. 2022, 155, 54–61.
- Ordonez, P.; Armstrong, T.; Oates, T.; Fackler, J. Using modified multivariate bag-of-words models to classify physiological data. In Proceedings of the 2011 IEEE 11th International Conference on Data Mining Workshops, Vancouver, BC, Canada, 11 December 2011; IEEE: Piscataway Township, NJ, USA, 2011; pp. 534–539.
- Baydogan, M.G.; Runger, G. Learning a symbolic representation for multivariate time series classification. Data Min. Knowl. Discov. 2015, 29, 400–422.
- Ruiz, A.P.; Flynn, M.; Large, J.; Middlehurst, M.; Bagnall, A. The great multivariate time series classification bake off: A review and experimental evaluation of recent algorithmic advances. Data Min. Knowl. Discov. 2021, 35, 401–449.
- Shervashidze, N.; Schweitzer, P.; Van Leeuwen, E.J.; Mehlhorn, K.; Borgwardt, K.M. Weisfeiler-Lehman graph kernels. J. Mach. Learn. Res. 2011, 12, 2539–2561. [Google Scholar]
- LeCun, Y. Generalization and network design strategies. Connect. Perspect. 1989, 19, 18. [Google Scholar]
- LeCun, Y.; Bengio, Y. Convolutional networks for images, speech, and time series. Handb. Brain Theory Neural Netw. 1995, 3361, 1995. [Google Scholar]
- Rumelhart, D.E.; Hinton, G.E.; Williams, R.J. Learning Internal Representations by Error Propagation. In Parallel Distributed Processing: Explorations in the Microstructure of Cognition, Vol. 1: Foundations; MIT Press: Cambridge, MA, USA, 1986; pp. 318–362. [Google Scholar]
- Elman, J.L. Finding structure in time. Cogn. Sci. 1990, 14, 179–211. [Google Scholar] [CrossRef]
- Jordan, M.I. Serial order: A parallel distributed processing approach. In Advances in Psychology; Elsevier: Amsterdam, The Netherlands, 1997; Volume 121, pp. 471–495. [Google Scholar]
- Hochreiter, S.; Schmidhuber, J. Long short-term memory. Neural Comput. 1997, 9, 1735–1780. [Google Scholar] [CrossRef]
- Bahdanau, D.; Cho, K.; Bengio, Y. Neural machine translation by jointly learning to align and translate. arXiv 2014, arXiv:1409.0473. [Google Scholar]
- Sugiyama, M.; Borgwardt, K. Halting in random walk kernels. Adv. Neural Inf. Process. Syst. 2015, 28, 1639–1647. [Google Scholar]
- Faouzi, J.; Janati, H. pyts: A Python Package for Time Series Classification. J. Mach. Learn. Res. 2020, 21, 1–6. [Google Scholar]
- Siglidis, G.; Nikolentzos, G.; Limnios, S.; Giatsidis, C.; Skianis, K.; Vazirgiannis, M. Grakel: A graph kernel library in python. J. Mach. Learn. Res. 2020, 21, 1–5. [Google Scholar]
- Pedregosa, F.; Varoquaux, G.; Gramfort, A.; Michel, V.; Thirion, B.; Grisel, O.; Blondel, M.; Prettenhofer, P.; Weiss, R.; Dubourg, V.; et al. Scikit-learn: Machine learning in Python. J. Mach. Learn. Res. 2011, 12, 2825–2830. [Google Scholar]
- Chollet, F. Deep Learning with Python; Simon and Schuster: New York, NY, USA, 2021. [Google Scholar]
- Data61, C. StellarGraph Machine Learning Library. 2018. Available online: https://github.com/stellargraph/stellargraph (accessed on 9 September 2025).
- Dau, H.A.; Bagnall, A.; Kamgar, K.; Yeh, C.C.M.; Zhu, Y.; Gharghabi, S.; Ratanamahatana, C.A.; Keogh, E. The UCR time series archive. IEEE/CAA J. Autom. Sin. 2019, 6, 1293–1305. [Google Scholar] [CrossRef]
- Middlehurst, M.; Ismail-Fawaz, A.; Guillaume, A.; Holder, C.; Guijo-Rubio, D.; Bulatova, G.; Tsaprounis, L.; Mentel, L.; Walter, M.; Schäfer, P.; et al. aeon: A Python toolkit for learning from time series. J. Mach. Learn. Res. 2024, 25, 1–10. [Google Scholar]

| Name | Train_Size | Test_Size | n_Classes | n_Timestamps |
|---|---|---|---|---|
| Crop | 7200 | 16,800 | 24 | 46 |
| FordB | 3636 | 810 | 2 | 500 |
| FordA | 3601 | 1320 | 2 | 500 |
| NonInvasiveFetalECGThorax2 | 1800 | 1965 | 42 | 750 |
| NonInvasiveFetalECGThorax1 | 1800 | 1965 | 42 | 750 |
| PhalangesOutlinesCorrect | 1800 | 858 | 2 | 80 |
| HandOutlines | 1000 | 370 | 2 | 2709 |
| TwoPatterns | 1000 | 4000 | 4 | 128 |

| Name | Train_Size | Test_Size | NumDimensions | n_Timestamps | n_Classes |
|---|---|---|---|---|---|
| ArticularyWordRecognition | 275 | 300 | 9 | 144 | 25 |
| EthanolConcentration | 261 | 263 | 3 | 1751 | 4 |
| FingerMovements | 316 | 100 | 28 | 50 | 2 |
| SelfRegulationSCP1 | 268 | 293 | 6 | 896 | 2 |
| UWaveGestureLibrary | 120 | 320 | 3 | 315 | 8 |

| Algorithm | Accuracy_Mean | Accuracy_std | Total_Time_Mean | Total_Time_std |
|---|---|---|---|---|
| BOSS-RF | 0.796 | 0.088 | 9.7 | 5.5 |
| MB-K-BOSS-VS | 0.727 | 0.101 | 11.5 | 6.9 |

| Algorithm | Accuracy_Mean | Accuracy_Pop_std | Total_Time_Mean | Total_Time_Pop_std |
|---|---|---|---|---|
| WEASEL_V2 | 0.904 | 0.078 | 339.6 | 189.9 |
| Rocket | 0.901 | 0.084 | 309.3 | 167.7 |
| MiniRocket | 0.900 | 0.082 | 18.7 | 20.1 |
| MRSQM_nsax0_nsfa5 | 0.889 | 0.093 | 418.8 | 239.4 |
| MRSQM_nsax1_nsfa1 | 0.878 | 0.101 | 176.2 | 101.1 |
| MRSQM_nsax0_nsfa1 | 0.861 | 0.109 | 117.2 | 76.1 |
| MRSQM_nsax1_nsfa0 | 0.839 | 0.113 | 104.2 | 59.0 |

| Algorithm | Accuracy_Mean | Accuracy_Pop_std | Total_Time_Mean | Total_Time_Pop_std |
|---|---|---|---|---|
| SVCcv-2_2.0_3.0_3.5 | 0.699 | 0.079 | 24.699 | 12.783 |
| SVC-2_3.5 | 0.697 | 0.079 | 20.734 | 9.494 |
| SVC-2_1.0 | 0.680 | 0.089 | 21.094 | 9.576 |

| Algorithm | Accuracy_Mean | Accuracy_Pop_std | Total_Time_Mean | Total_Time_Pop_std |
|---|---|---|---|---|
| IndividualLITEClassifier-ep100 | 0.819 | 0.121 | 599.293 | 408.343 |
| EncoderClassifier-ep20 | 0.811 | 0.096 | 1045.826 | 665.514 |
| CNNClassifier-ep200 | 0.794 | 0.110 | 59.852 | 25.028 |
| IndividualLITEClassifier-ep50 | 0.770 | 0.139 | 328.929 | 223.978 |
| ResNet-ep20 | 0.705 | 0.181 | 881.376 | 589.755 |
| IInception-ep20 | 0.642 | 0.242 | 1117.784 | 784.749 |
| IndividualLITEClassifier-ep20 | 0.634 | 0.222 | 135.210 | 90.210 |
| FCNClassifier-ep20 | 0.546 | 0.315 | 385.911 | 282.955 |
| MLPClassifier-ep200 | 0.493 | 0.225 | 66.662 | 33.646 |
| CNNClassifier-ep20 | 0.365 | 0.293 | 6.757 | 2.414 |
| MLPClassifier-ep20 | 0.353 | 0.264 | 7.605 | 3.389 |

| Algorithm | Accuracy_Mean | Accuracy_Pop_std | Total_Time_Mean | Total_Time_Pop_std |
|---|---|---|---|---|
| LSTM-attention-cnn2-ep50 | 0.805 | 0.106 | 157.001 | 147.923 |
| LSTM-attention-cnn-ep50 | 0.798 | 0.103 | 302.599 | 360.341 |
| LSTM-attention-ep50 | 0.798 | 0.102 | 610.468 | 556.662 |
| LSTM-attention-cnn2-ep20 | 0.795 | 0.107 | 94.234 | 73.159 |
| LSTM-attention-cnn-ep20 | 0.792 | 0.100 | 175.713 | 143.131 |
| LSTM-attention-ep20 | 0.783 | 0.107 | 391.999 | 334.938 |
| LSTM-CNN-ep50 | 0.718 | 0.130 | 138.903 | 102.731 |
| LSTM-CNN-ep20 | 0.709 | 0.123 | 118.926 | 95.803 |
| GCN-ep20 | 0.681 | 0.115 | 576.484 | 825.842 |
| LSTM-CNN-no-bn-ep50 | 0.680 | 0.171 | 181.424 | 149.712 |
| LSTM-CNN-no-bn-ep20 | 0.665 | 0.195 | 109.958 | 79.093 |

| Name | Window_Configs (Window Sizes) | Window Step |
|---|---|---|
| W0 | 24 | 1 |
| W8 | 4, 8, 12, 16, 20, 24, 28, 32, 36, 40 | 1 |
| W11 | 12, 16, 20, 24, 28, 32, 36, 40 | 4 |
| W14 | 12, 16, 20, 24, 28, 32 | 2 |
| W15 | 12, 16, 20, 24, 28, 32, 36, 40 | 8 |

| Name | Dilation_Filter_Configs |
|---|---|
| D0 | 1 (no dilation) |
| D2 | 1, 7, 9, 11 |
| D4 | 1, 11 |
| D6 | 1, 9 |

| Algorithm | Accuracy_Mean | Accuracy_Pop_std | Total_Time_Mean | Total_Time_Pop_std |
|---|---|---|---|---|
| LSTM-attention-cnn2-ep50-stack | 0.678 | 0.210 | 31.987 | 19.480 |
| LSTM-CNN-ep50-stack | 0.601 | 0.189 | 40.720 | 41.213 |
| LSTM-attention-cnn2-ep50-bag | 0.581 | 0.192 | 47.251 | 22.522 |
| LSTM-CNN-ep20-stack | 0.574 | 0.215 | 16.052 | 9.202 |
| LSTM-attention-cnn2-ep20-stack | 0.569 | 0.217 | 23.072 | 15.059 |
| LSTM-attention-cnn2-ep20-bag | 0.500 | 0.199 | 49.631 | 34.406 |

| Algorithm | Accuracy_Mean | Accuracy_Pop_std | Total_Time_Mean | Total_Time_Pop_std |
|---|---|---|---|---|
| EncoderClassifier-ep20 | 0.633 | 0.227 | 128.007 | 121.132 |
| CNNClassifier-ep200 | 0.593 | 0.218 | 11.475 | 5.231 |
| IInception-ep20 | 0.578 | 0.239 | 153.699 | 147.896 |
| IndividualLITEClassifier-ep50 | 0.575 | 0.285 | 53.479 | 47.540 |
| IndividualLITEClassifier-ep100 | 0.548 | 0.279 | 100.233 | 90.002 |
| CNNClassifier-ep20 | 0.367 | 0.269 | 2.303 | 0.658 |
| ResNet-ep20 | 0.299 | 0.154 | 103.053 | 101.543 |

| Algorithm | Accuracy_Mean | Accuracy_Pop_std | Total_Time_Mean | Total_Time_Pop_std |
|---|---|---|---|---|
| SBMR-MV-ridge-cv-W0-trend-D2-BG-STACK | 0.700 | 0.269 | 59.023 | 43.971 |
| SBMR-MV-RF-W0-trend-D4-BG-STACK | 0.691 | 0.216 | 29.784 | 22.312 |
| SBMR-MV-ridge-cv-W0-trend-D4-BG-STACK | 0.684 | 0.246 | 29.422 | 22.086 |
| SBMR-MV-RF-W0-trend-D2-BG-STACK | 0.684 | 0.237 | 59.453 | 44.198 |
| SBMR-MV-ridge-cv-W0-trend-D2-BG-BAG | 0.682 | 0.238 | 57.667 | 44.877 |
| SBMR-MV-RF-W0-trend-D6-UG-STACK | 0.667 | 0.239 | 26.334 | 20.425 |
| SBMR-MV-RF-W0-trend-D0-UG-STACK | 0.657 | 0.225 | 13.416 | 10.537 |
| SBMR-MV-RF-W0-noTrend-D0-UG-STACK | 0.650 | 0.192 | 6.987 | 5.445 |
| SBMR-MV-RF-W0-trend-D2-BG-BAG | 0.647 | 0.183 | 60.185 | 47.299 |
| SBMR-MV-ridge-cv-W0-trend-D6-UG-BAG | 0.646 | 0.230 | 25.831 | 20.542 |
| SBMR-MV-ridge-cv-W0-trend-D4-BG-BAG | 0.646 | 0.224 | 28.723 | 22.514 |
| SBMR-MV-ridge-cv-W0-trend-D0-UG-STACK | 0.643 | 0.226 | 13.293 | 10.460 |
| SBMR-MV-RF-W0-trend-D4-BG-BAG | 0.641 | 0.187 | 29.460 | 23.218 |
| SBMR-MV-RF-W0-trend-D6-UG-BAG | 0.637 | 0.194 | 27.080 | 21.417 |
| SBMR-MV-RF-W0-trend-D0-UG-BAG | 0.631 | 0.173 | 13.208 | 10.507 |
| SBMR-MV-ridge-cv-W0-noTrend-D0-UG-STACK | 0.629 | 0.227 | 6.847 | 5.475 |
| SBMR-MV-ridge-cv-W0-trend-D0-UG-BAG | 0.576 | 0.196 | 12.982 | 10.392 |
| SBMR-MV-RF-W0-noTrend-D0-UG-BAG | 0.571 | 0.170 | 6.870 | 5.493 |
| SBMR-MV-ridge-cv-W0-noTrend-D0-UG-BAG | 0.549 | 0.196 | 6.770 | 5.463 |

| Algorithm | Accuracy_Mean | Accuracy_Pop_std | Total_Time_Mean | Total_Time_Pop_std |
|---|---|---|---|---|
| SBMR-MV-ridge-cv-W15-trend-D2-BG-BAG-chi | 0.733 | 0.217 | 62.364 | 41.879 |
| SBMR-MV-ridge-cv-W15-trend-D2-BG-STACK-chi | 0.730 | 0.242 | 71.386 | 39.949 |
| SBMR-MV-ridge-cv-W15-trend-D6-BG-STACK-chi | 0.729 | 0.253 | 38.239 | 22.024 |
| SBMR-MV-ridge-cv-W15-trend-D6-BG-BAG-chi | 0.725 | 0.245 | 31.643 | 21.584 |
| SBMR-MV-ridge-cv-W11-trend-D6-BG-STACK-chi | 0.722 | 0.262 | 64.418 | 41.776 |
| SBMR-MV-ridge-cv-W11-trend-D6-BG-BAG-chi | 0.718 | 0.233 | 59.652 | 43.653 |
| SBMR-MV-ridge-cv-W11-trend-D4-BG-STACK-chi | 0.717 | 0.250 | 64.460 | 42.175 |
| SBMR-MV-ridge-cv-W11-trend-D6-BG-STACK | 0.717 | 0.268 | 67.959 | 41.675 |
| SBMR-MV-ridge-cv-W15-trend-D4-BG-STACK-chi | 0.715 | 0.251 | 35.160 | 20.303 |
| SBMR-MV-ridge-cv-W15-trend-D6-BG-BAG | 0.713 | 0.254 | 31.486 | 21.149 |
| SBMR-MV-ridge-cv-W8-trend-D6-BG-STACK | 0.713 | 0.263 | 298.052 | 210.282 |
| SBMR-MV-ridge-cv-W15-trend-D6-BG-STACK | 0.710 | 0.268 | 36.318 | 19.974 |
| SBMR-MV-ridge-cv-W11-trend-D4-BG-BAG-chi | 0.709 | 0.231 | 59.747 | 43.973 |
| SBMR-MV-ridge-cv-W11-trend-D6-BG-BAG | 0.708 | 0.253 | 75.683 | 56.347 |
| SBMR-MV-ridge-cv-W14-trend-D6-BG-STACK | 0.703 | 0.265 | 89.956 | 61.231 |
| SBMR-MV-ridge-cv-W15-trend-D4-BG-BAG-chi | 0.699 | 0.248 | 31.711 | 21.682 |
| SBMR-MV-RF-W15-trend-D6-BG-STACK | 0.698 | 0.213 | 37.453 | 20.296 |
| SBMR-MV-ridge-cv-W8-trend-D6-BG-BAG | 0.695 | 0.257 | 287.476 | 219.437 |
| SBMR-MV-RF-W11-trend-D6-BG-BAG | 0.687 | 0.201 | 59.510 | 42.949 |
| SBMR-MV-RF-W11-trend-D6-BG-STACK | 0.670 | 0.253 | 67.362 | 41.766 |
| SBMR-MV-RF-W15-trend-D6-BG-BAG | 0.651 | 0.188 | 32.324 | 21.439 |

| Algorithm | Accuracy_Mean | Accuracy_Pop_std | Total_Time_Mean | Total_Time_Pop_std |
|---|---|---|---|---|
| miniRocket | 0.765 | 0.228 | 7.439 | 5.958 |
| ROCKET | 0.747 | 0.227 | 118.868 | 109.902 |
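The Accuracy_Mean/Accuracy_Pop_std and Total_Time_Mean/Total_Time_Pop_std columns in the tables above are per-classifier aggregates over the benchmark datasets: the mean and the population standard deviation of the per-dataset values. A minimal sketch of how such aggregates can be computed with the Python standard library (the per-dataset scores below are hypothetical placeholders, not values from the experiments):

```python
import statistics

def aggregate(scores):
    """Mean and population standard deviation, rounded to three
    decimals as in the result tables above."""
    mean = statistics.fmean(scores)
    pop_std = statistics.pstdev(scores)  # population std, not sample std
    return round(mean, 3), round(pop_std, 3)

# Hypothetical per-dataset accuracies for one classifier.
per_dataset_acc = [0.765, 0.90, 0.55, 0.81, 0.70]
print(aggregate(per_dataset_acc))  # -> (0.745, 0.117)
```

Note that `pstdev` (population standard deviation, dividing by n) rather than `stdev` (sample standard deviation, dividing by n − 1) matches the "Pop_std" column naming.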
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content. |
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Glenis, A.; Vouros, G. Scalable Univariate and Multivariate Time-Series Classifiers with Deep Learning Methods Exploiting Symbolic Representations. Computers 2025, 14, 563. https://doi.org/10.3390/computers14120563