Optimising Text Classification in Social Networks via Deep Learning-Based Dimensionality Reduction
Abstract
1. Introduction
- RQ1: Can dimensionality reduction techniques improve text classification performance in terms of accuracy, processing time, and carbon emissions?
- RQ2: Are DL-based dimensionality reduction frameworks effective and worthwhile compared to traditional methods?
- RQ3: Do dimensionality reduction techniques provide a significant improvement in processing time while maintaining comparable accuracy levels?
- RQ4: Are all original features necessary to preserve classification accuracy, or can similar performance be achieved with fewer dimensions and reduced model complexity?
- The development of a robust and flexible DL-based framework that improves text classification in social networks from a computational perspective (processing time and carbon emissions) without degrading, and in some cases improving, classification accuracy.
- A thorough evaluation of the proposed DL-based framework. We assess the framework on two benchmark datasets, across several target dimensionalities, multiple machine learning algorithms, and both traditional and DL-based dimensionality reduction techniques. We also provide an ablation study covering the components of the proposal.
- An in-depth analysis of the trade-offs between dimensionality reduction and classification performance, providing insights into different configurations, classification times, and carbon emissions.
2. Related Work
3. DL-Based Framework for Optimising Text Classification in Social Networks
3.1. Text Preprocessing
- Retained elements: Stop words, punctuation, and emoticons (to preserve semantic and emotional content).
- Removed elements: HTML tags, embedded URLs (e.g., ‘http://’), geolocation fields, and other non-linguistic metadata.
- Dataset-specific cleaning: For IMDb reviews, remove residual HTML artifacts while preserving sentence structure. For BullyingV3.0 tweets, additionally remove user mentions, retweet markers, and coordinate tags, keeping only the message text (a cleaning sketch follows this list).
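The cleaning rules above can be expressed compactly with regular expressions. The following is a minimal sketch assuming regex-based filtering; the patterns and function names are illustrative, not the authors' exact implementation.

```python
import re

# Remove markup and metadata while keeping stop words, punctuation,
# and emoticons, as described in Section 3.1.
HTML_TAG = re.compile(r"<[^>]+>")
URL = re.compile(r"https?://\S+|www\.\S+")
MENTION = re.compile(r"@\w+:?")          # user mentions (tweets only)
RT_MARKER = re.compile(r"\bRT\b:?\s*")   # retweet markers (tweets only)

def clean_imdb(review: str) -> str:
    """Strip residual HTML artifacts and URLs from an IMDb review."""
    text = HTML_TAG.sub(" ", review)
    text = URL.sub(" ", text)
    return re.sub(r"\s+", " ", text).strip()

def clean_tweet(tweet: str) -> str:
    """Additionally drop mentions and retweet markers from a tweet."""
    text = clean_imdb(tweet)
    text = MENTION.sub(" ", text)
    text = RT_MARKER.sub(" ", text)
    return re.sub(r"\s+", " ", text).strip()

print(clean_tweet("RT @user: great movie! :) http://t.co/abc <br />"))
# -> "great movie! :)"  (emoticon and punctuation preserved)
```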
3.2. Large Language Model Fine-Tuning
Dataset for Fine-Tuning the Large Language Model
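The fine-tuning details are given in the paper itself; as a minimal sketch, contrastive fine-tuning with the sentence-transformers library (whose all-mpnet-base-v2 checkpoint is listed in the references) might look as follows. The pairing strategy, loss choice, and hyperparameters here are assumptions, not the reported configuration.

```python
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses

# Hypothetical labelled pairs built from the training split: label 1.0 for
# same-class pairs, 0.0 for cross-class pairs. The real pairing strategy is
# described in the paper, not reproduced here.
train_examples = [
    InputExample(texts=["this film was wonderful", "a truly great movie"], label=1.0),
    InputExample(texts=["this film was wonderful", "dull and predictable"], label=0.0),
]
loader = DataLoader(train_examples, shuffle=True, batch_size=16)

model = SentenceTransformer("sentence-transformers/all-mpnet-base-v2")
loss = losses.ContrastiveLoss(model)  # one plausible contrastive objective

# Fine-tune briefly, then encode documents into 768-dimensional embeddings
# that feed the dimensionality reduction step of Section 3.3.
model.fit(train_objectives=[(loader, loss)], epochs=1, warmup_steps=10)
embeddings = model.encode(["an unseen review to classify"])
```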
3.3. Dimensionality Reduction
- Linear methods: PCA [53], Independent Component Analysis (ICA) [54], Locally Linear Embedding (LLE) [55], and Truncated SVD [56] are applied to reduce dimensionality. PCA and Truncated SVD find orthogonal projections that capture maximal variance (Truncated SVD operating on the uncentred data matrix), ICA seeks statistically independent components, and LLE, strictly a non-linear technique despite its name [55], preserves local neighbourhood geometry.
- Non-linear methods: We use Uniform Manifold Approximation and Projection (UMAP) [57], a dimensionality reduction technique grounded in Riemannian geometry and algebraic topology. UMAP constructs a fuzzy topological graph where each data point is connected to its nearest neighbours via weighted edges. It then optimises a low-dimensional embedding that preserves the local structure by maintaining pairwise relationships between connected points. In addition, we implement shallow autoencoder (AE) and variational autoencoder (VAE) models. An autoencoder is a type of neural network consisting of two main components: an encoder, which compresses the input into a latent representation, and a decoder, which reconstructs the original input from this representation. The VAE extends this concept by introducing probabilistic inference, aiming to model the distribution of the latent space (typically assuming a Gaussian prior) rather than producing a fixed latent vector. This makes VAEs more robust and capable of generalising beyond the training data. Both models compress data through a low-dimensional bottleneck and reconstruct the input, allowing them to serve as effective non-linear dimensionality reduction techniques. Our implementations use one or two hidden layers with ReLU activation functions and a sigmoid function in the output layer. The VAE includes stochastic latent variables with a KL-divergence regulariser. The training phase is conducted using the Adam optimiser and binary cross-entropy loss (Equation (1)) to minimise reconstruction error. Formally, for an input vector $x \in [0,1]^d$ and its reconstruction $\hat{x}$, the loss is
$$\mathcal{L}(x, \hat{x}) = -\sum_{i=1}^{d} \left[ x_i \log \hat{x}_i + (1 - x_i) \log(1 - \hat{x}_i) \right]. \tag{1}$$
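As a concrete illustration of this step, the sketch below reduces a matrix X of 768-dimensional sentence embeddings with the classical reducers and a shallow Keras autoencoder. Layer widths, epoch counts, the neighbour count for LLE, and the min-max scaling applied before the sigmoid/binary cross-entropy autoencoder are our assumptions, not the paper's exact configuration.

```python
import numpy as np
import tensorflow as tf
import umap  # from the umap-learn package
from sklearn.decomposition import PCA, FastICA, TruncatedSVD
from sklearn.manifold import LocallyLinearEmbedding
from sklearn.preprocessing import MinMaxScaler

X = np.random.default_rng(0).normal(size=(1000, 768)).astype("float32")  # stand-in embeddings
k = 50  # target dimensionality

# Classical reducers share scikit-learn's fit_transform interface.
# LLE is given enough neighbours to support a k-dimensional embedding.
reducers = {
    "PCA": PCA(n_components=k),
    "ICA": FastICA(n_components=k),
    "TSVD": TruncatedSVD(n_components=k),
    "LLE": LocallyLinearEmbedding(n_components=k, n_neighbors=k + 1),
    "UMAP": umap.UMAP(n_components=k),
}
reduced = {name: r.fit_transform(X) for name, r in reducers.items()}

# Shallow autoencoder: ReLU hidden layers, sigmoid output, Adam optimiser,
# and binary cross-entropy reconstruction loss (Equation (1)). BCE assumes
# targets in [0, 1], hence the min-max scaling (our assumption).
X01 = MinMaxScaler().fit_transform(X)
inputs = tf.keras.Input(shape=(768,))
hidden = tf.keras.layers.Dense(256, activation="relu")(inputs)
code = tf.keras.layers.Dense(k, activation="relu")(hidden)
decoded = tf.keras.layers.Dense(256, activation="relu")(code)
outputs = tf.keras.layers.Dense(768, activation="sigmoid")(decoded)
autoencoder = tf.keras.Model(inputs, outputs)
autoencoder.compile(optimizer="adam", loss="binary_crossentropy")
autoencoder.fit(X01, X01, epochs=10, batch_size=64, verbose=0)

# The trained encoder yields the low-dimensional representation.
encoder = tf.keras.Model(inputs, code)
reduced["AE"] = encoder.predict(X01, verbose=0)
```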
3.4. Text Classification
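The reduced representations feed three standard classifiers, K-NN, SVM, and logistic regression (the models compared throughout Section 5). A minimal evaluation sketch with scikit-learn, using library-default hyperparameters as an assumption rather than the paper's tuned settings, is:

```python
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, f1_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

# The three classifiers compared in the results tables.
classifiers = {
    "K-NN": KNeighborsClassifier(),
    "SVM": SVC(),
    "LR": LogisticRegression(max_iter=1000),
}

def evaluate(clf, X_train, y_train, X_test, y_test):
    """Fit on reduced embeddings and report the two metrics used in Section 5."""
    clf.fit(X_train, y_train)
    pred = clf.predict(X_test)
    return accuracy_score(y_test, pred), f1_score(y_test, pred)
```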
4. Experimentation
4.1. Data
4.2. Model Configuration and Parameters
5. Ablation Study
5.1. IMDB Dataset Results
5.2. BullyingV3.0 Dataset Results
5.3. Time and Performance Analysis
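For reference, classification time and carbon emissions of the kind reported in the tables below can be measured with the CodeCarbon tracker cited in the references, together with a wall-clock timer. This is a minimal sketch under default tracker settings; the paper's exact measurement setup is not reproduced here.

```python
import time
from codecarbon import EmissionsTracker
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=2000, n_features=50, random_state=0)

tracker = EmissionsTracker()   # estimates CO2-eq from hardware power draw
tracker.start()
start = time.perf_counter()
LogisticRegression(max_iter=1000).fit(X, y)
elapsed = time.perf_counter() - start
emissions = tracker.stop()     # estimated kg CO2-eq for the tracked block

print(f"time: {elapsed:.4f} s, emissions: {emissions:.6f} kg CO2-eq")
```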
6. Discussion
Practical Implications and Limitations
7. Conclusions
Author Contributions
Funding
Data Availability Statement
Acknowledgments
Conflicts of Interest
Abbreviations
AE | Autoencoder |
Adam | Adaptive Moment Estimation |
BERT | Bidirectional Encoder Representations from Transformers |
CNN | Convolutional Neural Network |
DCT | Discrete Cosine Transform |
DL | Deep Learning |
GloVe | Global Vectors for Word Representation |
ICA | Independent Component Analysis |
KL | Kullback–Leibler Divergence |
KNN | K-Nearest Neighbour |
LLE | Locally Linear Embedding |
LLM | Large Language Model |
LSA | Latent Semantic Analysis |
LSI | Latent Semantic Indexing |
NLP | Natural Language Processing |
NMF | Non-negative Matrix Factorization |
NPM | New Performance Metric |
PCA | Principal Component Analysis |
ReLU | Rectified Linear Unit |
rRF | Removal of Redundant Feature |
RNN | Recurrent Neural Network |
SDG | Sustainable Development Goal |
SOM | Self-Organising Map |
SVD | Singular Value Decomposition |
SVM | Support Vector Machine |
t-SNE | t-distributed Stochastic Neighbour Embedding |
UMAP | Uniform Manifold Approximation and Projection |
VAE | Variational Autoencoder |
Word2Vec | Word to Vector |
Appendix A
Appendix A.1. IMDb Results
Appendix A.2. BullyingV3.0 Results
References
- Saad, A.I. Opinion mining on US Airline Twitter data using machine learning techniques. In Proceedings of the 2020 16th International Computer Engineering Conference (ICENCO), Cairo, Egypt, 29–30 December 2020; IEEE: Piscataway, NJ, USA, 2020; pp. 59–63.
- Das, R.; Singh, T.D. A step towards sentiment analysis of Assamese news articles using lexical features. In Proceedings of the International Conference on Computing and Communication Systems: I3CS 2020, NEHU, Shillong, India, 28–30 April 2020; Springer: Singapore, 2021; pp. 15–23.
- Rahat, A.M.; Kahir, A.; Masum, A.K.M. Comparison of Naive Bayes and SVM Algorithm based on sentiment analysis using review dataset. In Proceedings of the 2019 8th International Conference System Modeling and Advancement in Research Trends (SMART), Moradabad, India, 22–23 November 2019; IEEE: Piscataway, NJ, USA, 2019; pp. 266–270.
- Daud, S.; Ullah, M.; Rehman, A.; Saba, T.; Damaševičius, R.; Sattar, A. Topic classification of online news articles using optimized machine learning models. Computers 2023, 12, 16.
- Koufakou, A. Deep learning for opinion mining and topic classification of course reviews. Educ. Inf. Technol. 2024, 29, 2973–2997.
- Ghiassi, M.; Lee, S.; Gaikwad, S.R. Sentiment analysis and spam filtering using the YAC2 clustering algorithm with transferability. Comput. Ind. Eng. 2022, 165, 107959.
- Mageshkumar, N.; Vijayaraj, A.; Arunpriya, N.; Sangeetha, A. Efficient spam filtering through intelligent text modification detection using machine learning. Mater. Today Proc. 2022, 64, 848–858.
- Abid, M.A.; Ullah, S.; Siddique, M.A.; Mushtaq, M.F.; Aljedaani, W.; Rustam, F. Spam SMS filtering based on text features and supervised machine learning techniques. Multimed. Tools Appl. 2022, 81, 39853–39871.
- Supriyono; Wibawa, A.P.; Suyono; Kurniawan, F. Advancements in natural language processing: Implications, challenges, and future directions. Telemat. Inform. Rep. 2024, 16, 100173.
- Zareapoor, M.; Seeja, K.R. Feature Extraction or Feature Selection for Text Classification: A Case Study on Phishing Email Detection. Int. J. Inf. Eng. Electron. Bus. 2015, 7, 60–65.
- Kumar, K.V.; Srinivasan, R.; Singh, E.B. An efficient approach for dimensionality reduction and classification of high dimensional text documents. In Proceedings of the First International Conference on Data Science, E-Learning and Information Systems, Madrid, Spain, 1–2 October 2018; pp. 1–5.
- Vieira, A.S.; Diz, M.L.B.; Iglesias, E.L. Improving the text classification using clustering and a novel HMM to reduce the dimensionality. Comput. Methods Programs Biomed. 2016, 136, 119–130.
- McAllister, R.; Sheppard, J. Taxonomic Dimensionality Reduction in Bayesian Text Classification. In Proceedings of the 2012 11th International Conference on Machine Learning and Applications, Boca Raton, FL, USA, 12–15 December 2012; IEEE: Piscataway, NJ, USA, 2012; Volume 1, pp. 508–513.
- Akritidis, L.; Bozanis, P. How dimensionality reduction affects sentiment analysis NLP tasks: An experimental study. In Proceedings of the IFIP International Conference on Artificial Intelligence Applications and Innovations, Hersonissos, Crete, Greece, 17–20 June 2022; Springer: Cham, Switzerland, 2022; pp. 301–312.
- Minaee, S.; Kalchbrenner, N.; Cambria, E.; Nikzad, N.; Chenaghlu, M.; Gao, J. Deep Learning–based Text Classification: A Comprehensive Review. ACM Comput. Surv. 2021, 54, 62.
- Mikolov, T.; Sutskever, I.; Chen, K.; Corrado, G.S.; Dean, J. Distributed representations of words and phrases and their compositionality. Adv. Neural Inf. Process. Syst. 2013, 26, 1–9.
- Pennington, J.; Socher, R.; Manning, C.D. Glove: Global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), Doha, Qatar, 25–29 October 2014; pp. 1532–1543.
- Devlin, J.; Chang, M.W.; Lee, K.; Toutanova, K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), Minneapolis, MN, USA, 2–7 June 2019.
- Evangelista, P.F.; Embrechts, M.J.; Szymański, B.K. Taming the Curse of Dimensionality in Kernels and Novelty Detection. In Proceedings of the Online World Conference on Soft Computing in Industrial Applications, Online, 20 September–8 October 2004.
- Hosseini, S.; Varzaneh, Z.A. Deep text clustering using stacked AutoEncoder. Multimed. Tools Appl. 2022, 81, 10861–10881.
- Daneshfar, F.; Soleymanbaigi, S.; Nafisi, A.; Yamini, P. Elastic deep autoencoder for text embedding clustering by an improved graph regularization. Expert Syst. Appl. 2024, 238, 121780.
- Heusinger, M.; Raab, C.; Schleif, F.M. Dimensionality reduction in the context of dynamic social media data streams. Evol. Syst. 2022, 13, 387–401.
- Khan, J.; Ahmad, K.; Jagatheesaperumal, S.K.; Sohn, K.A. Textual variations in social media text processing applications: Challenges, solutions, and trends. Artif. Intell. Rev. 2025, 58, 89.
- Singh, K.N.; Devi, S.D.; Devi, H.M.; Mahanta, A.K. A novel approach for dimension reduction using word embedding: An enhanced text classification approach. Int. J. Inf. Manag. Data Insights 2022, 2, 100061.
- Zheng, W.; Qian, Y. Aggressive dimensionality reduction with reinforcement local feature selection for text categorization. In Proceedings of the Artificial Intelligence and Computational Intelligence: International Conference, AICI 2010, Sanya, China, 23–24 October 2010; Proceedings, Part I 2. Springer: Berlin/Heidelberg, Germany, 2010; pp. 365–372.
- Mohamed, A. An effective dimension reduction algorithm for clustering Arabic text. Egypt. Inform. J. 2020, 21, 1–5.
- Walkowiak, T.; Datko, S.; Maciejewski, H. Reduction of dimensionality of feature vectors in subject classification of text documents. In Proceedings of the Reliability and Statistics in Transportation and Communication: Selected Papers from the 18th International Conference on Reliability and Statistics in Transportation and Communication, RelStat'18, Riga, Latvia, 17–20 October 2018; Springer: Cham, Switzerland, 2019; pp. 159–167.
- Bojanowski, P.; Grave, E.; Joulin, A.; Mikolov, T. Enriching word vectors with subword information. Trans. Assoc. Comput. Linguist. 2017, 5, 135–146.
- Elhadad, M.K.; Badran, K.M.; Salama, G.I.M. A novel approach for ontology-based dimensionality reduction for web text document classification. In Proceedings of the 2017 IEEE/ACIS 16th International Conference on Computer and Information Science (ICIS), Wuhan, China, 24–26 May 2017; pp. 373–378.
- Corrêa, R.F.; Ludermir, T.B. Dimensionality Reduction by Semantic Mapping in Text Categorization. In Proceedings of the International Conference on Neural Information Processing, Calcutta, India, 22–25 November 2004.
- Swarnalatha, K.; Kumar, N.V.; Guru, D.S.; Anami, B.S. Analysis of Dimensionality Reduction Techniques for Effective Text Classification. In Proceedings of the 2021 International Conference on Intelligent Technologies (CONIT), Hubli, India, 25–27 June 2021; pp. 1–5.
- Davy, M.; Luz, S. Dimensionality reduction for active learning with nearest neighbour classifier in text categorisation problems. In Proceedings of the Sixth International Conference on Machine Learning and Applications (ICMLA 2007), Cincinnati, OH, USA, 13–15 December 2007; pp. 292–297.
- Chamorro-Padial, J.; Rodríguez-Sánchez, R. Text Categorisation Through Dimensionality Reduction Using Wavelet Transform. J. Inf. Knowl. Manag. 2020, 19, 2050039.
- Saarikoski, J.; Laurikkala, J.; Järvelin, K.; Siermala, M.; Juhola, M. Dimensionality reduction in text classification using scatter method. Int. J. Data Min. Model. Manag. 2014, 6, 1–21.
- Vieira, A.S.; Iglesias, E.L.; Diz, M.L.B. A New Dimensionality Reduction Technique Based on HMM for Boosting Document Classification. In Proceedings of the Practical Applications of Computational Biology & Bioinformatics, Salamanca, Spain, 3–5 June 2015.
- Kim, H.; Howland, P.; Park, H. Dimension Reduction in Text Classification with Support Vector Machines. J. Mach. Learn. Res. 2005, 6, 37–53.
- Yin, S.; Huang, Z.; Chen, L.; Qiu, Y. A Approach for Text Classification Feature Dimensionality Reduction and Rule Generation on Rough Set. In Proceedings of the 2008 3rd International Conference on Innovative Computing Information and Control, Dalian, China, 18–20 June 2008; p. 554.
- Durmaz, O.; Bilge, H.S. Effects of dimensionality reduction and feature selection in text classification. In Proceedings of the 2011 IEEE 19th Signal Processing and Communications Applications Conference (SIU), Antalya, Turkey, 20–22 April 2011; pp. 21–24.
- Jain, G.; Ginwala, A.; Aslandogan, A. An approach to text classification using dimensionality reduction and combination of classifiers. In Proceedings of the 2004 IEEE International Conference on Information Reuse and Integration, 2004. IRI 2004, Las Vegas, NV, USA, 8–10 November 2004; pp. 564–569.
- Dong, W. Mixed feature dimension reduction strategy for text categorization. J. Guizhou Norm. Coll. 2012, 28, 6–10.
- Boyapati, M.; Aygun, R. Semanformer: Semantics-aware Embedding Dimensionality Reduction Using Transformer-Based Models. In Proceedings of the 2024 IEEE 18th International Conference on Semantic Computing (ICSC), Laguna Hills, CA, USA, 5–7 February 2024; IEEE: Piscataway, NJ, USA, 2024; pp. 134–141.
- Thor, F.; Nettelblad, C. Dimensionality reduction of genetic data using contrastive learning. Genetics 2025, iyaf068.
- Nareklishvili, M.; Geitle, M. Deep ensemble transformers for dimensionality reduction. IEEE Trans. Neural Netw. Learn. Syst. 2024, 36, 2091–2102.
- Sakr, C.; Khailany, B. Espace: Dimensionality reduction of activations for model compression. arXiv 2024, arXiv:2410.05437.
- Khan, A.; Majumdar, D.; Mondal, B. Sentiment analysis of emoji fused reviews using machine learning and Bert. Sci. Rep. 2025, 15, 7538.
- Dandannavar, P.; Mangalwede, S.; Deshpande, S. Emoticons and their effects on sentiment analysis of Twitter data. In Proceedings of the EAI International Conference on Big Data Innovation for Sustainable Cognitive Computing: BDCC 2018, Coimbatore, India, 13–15 December 2018; Springer: Cham, Switzerland, 2020; pp. 191–201.
- Maas, A.L.; Daly, R.E.; Pham, P.T.; Huang, D.; Ng, A.Y.; Potts, C. Learning Word Vectors for Sentiment Analysis. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, Portland, OR, USA, 19–24 June 2011; pp. 142–150.
- Bullying v3 Dataset. n.d. University of Wisconsin–Madison. Available online: https://research.cs.wisc.edu/bullying/data.html (accessed on 23 July 2025).
- Peters, M.E.; Neumann, M.; Iyyer, M.; Gardner, M.; Clark, C.; Lee, K.; Zettlemoyer, L. Deep Contextualized Word Representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), New Orleans, LA, USA, 1–6 June 2018; pp. 2227–2237.
- Radford, A.; Narasimhan, K.; Salimans, T.; Sutskever, I. Improving Language Understanding by Generative Pre-Training. 2018. Available online: https://cdn.openai.com/research-covers/language-unsupervised/language_understanding_paper.pdf (accessed on 23 July 2025).
- Reimers, N.; Gurevych, I. Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing, Hong Kong, China, 3–7 November 2019; Association for Computational Linguistics: Stroudsburg, PA, USA, 2019; pp. 1–11.
- Khosla, P.; Teterwak, P.; Wang, C.; Sarna, A.; Tian, Y.; Isola, P.; Maschinot, A.; Liu, C.; Krishnan, D. Supervised contrastive learning. Adv. Neural Inf. Process. Syst. 2020, 33, 18661–18673.
- Jolliffe, I.T. Principal component analysis for special types of data. In Principal Component Analysis; Springer Series in Statistics; Springer: New York, NY, USA, 2002.
- Comon, P. Independent component analysis, a new concept? Signal Process. 1994, 36, 287–314.
- Roweis, S.T.; Saul, L.K. Nonlinear dimensionality reduction by locally linear embedding. Science 2000, 290, 2323–2326.
- Hansen, P.C. The truncated SVD as a method for regularization. BIT Numer. Math. 1987, 27, 534–553.
- McInnes, L.; Healy, J.; Melville, J. Umap: Uniform manifold approximation and projection for dimension reduction. arXiv 2018, arXiv:1802.03426.
- Cover, T.; Hart, P. Nearest neighbor pattern classification. IEEE Trans. Inf. Theory 1967, 13, 21–27.
- McCullagh, P.; Nelder, J.A. Generalized Linear Models; Springer: Boston, MA, USA, 1989.
- Chang, C.C.; Lin, C.J. LIBSVM: A Library for Support Vector Machines. ACM Trans. Intell. Syst. Technol. 2011, 2, 27.
- Wang, H.; Zhang, H.; Yu, D. On the dimensionality of sentence embeddings. arXiv 2023, arXiv:2310.15285.
- sentence-transformers/all-mpnet-base-v2 · Hugging Face. Available online: https://huggingface.co/sentence-transformers/all-mpnet-base-v2 (accessed on 3 June 2025).
- sentence-transformers/all-distilroberta-v1 · Hugging Face. Available online: https://huggingface.co/sentence-transformers/all-distilroberta-v1 (accessed on 3 June 2025).
- Song, K.; Tan, X.; Qin, T.; Lu, J.; Liu, T.Y. Mpnet: Masked and permuted pre-training for language understanding. Adv. Neural Inf. Process. Syst. 2020, 33, 16857–16867.
- Pretrained Models—Sentence-Transformers Documentation. Available online: https://www.sbert.net/docs/pretrained_models.html (accessed on 3 June 2025).
- Pedregosa, F.; Varoquaux, G.; Gramfort, A.; Michel, V.; Thirion, B.; Grisel, O.; Blondel, M.; Prettenhofer, P.; Weiss, R.; Dubourg, V.; et al. Scikit-learn: Machine learning in Python. J. Mach. Learn. Res. 2011, 12, 2825–2830.
- Wilcoxon, F. Individual comparisons by ranking methods. Biom. Bull. 1945, 1, 80–83.
- CodeCarbon: Track Carbon Emissions from Machine Learning Training. 2021. Available online: https://github.com/mlco2/codecarbon (accessed on 29 May 2025).
Name | Size | Classes | Social Platform | Class Distribution |
---|---|---|---|---|
IMDb | 50,000 | 2 | IMDb | 25,000 samples per class |
BullyingV3.0 | 7321 | 2 | Twitter | 2102 bullying samples (28.71%)
Statistic (Characters per Document) | IMDb | BullyingV3.0 |
---|---|---|
Mean | 1289.214 | 89 |
Standard Deviation | 974.187 | 36.551 |
Minimum | 32 | 5 |
First Quartile | 691 | 58 |
Second Quartile (Median) | 957 | 90 |
Third Quartile | 1563 | 124 |
Maximum | 13,604 | 144 |
Class | Mean Words | Median Words | Std. Words | Mean Tokens | Median Tokens | Std. Tokens |
---|---|---|---|---|---|---|
Non-Bullying (0) | 10.65 | 10 | 5.00 | 20.49 | 18 | 10.35 |
Bullying (1) | 8.87 | 9 | 4.11 | 15.80 | 16 | 6.04 |
Class | Mean Words | Median Words | Std. Words | Mean Tokens | Median Tokens | Std. Tokens |
---|---|---|---|---|---|---|
Negative (0) | 225.69 | 171 | 162.66 | 254.14 | 192 | 183.35 |
Positive (1) | 231.85 | 171 | 176.84 | 260.04 | 191 | 198.85 |
Model | Accuracy (Without Fine-Tuning) | F1 (Without Fine-Tuning) | Accuracy (With Fine-Tuning) | F1 (With Fine-Tuning)
---|---|---|---|---
K-NN | 90.99 | 90.94 | 91.45 | 91.49 |
SVM | 91.98 | 92.00 | 92.03 | 92.05 |
LR | 92.51 | 92.52 | 92.58 | 92.59 |
Classifier | Metric | Dims | PCA | ICA | TSVD | LLE | UMAP | AE | VAE |
---|---|---|---|---|---|---|---|---|---|
K-NN | Accuracy | 300 | 91.03 | 70.94 | 90.99 | 89.74 | 90.16 | 90.80 | 88.33 |
200 | 91.06 | 74.49 | 91.03 | 90.10 | 90.04 | 90.73 | 87.90 | ||
100 | 91.14 | 79.72 | 91.09 | 90.28 | 90.16 | 90.69 | 89.33 | ||
50 | 91.02 | 82.31 | 91.02 | 90.18 | 89.98 | 90.59 | 90.12 | ||
30 | 91.08 | 84.36 | 90.98 | 90.17 | 90.38 | 90.69 | 89.64 | ||
10 | 90.43 | 88.60 | 90.42 | 90.07 | 90.28 | 90.52 | 89.98 | ||
5 | 90.10 | 89.86 | 90.10 | 89.94 | 90.19 | 89.99 | 89.91 | ||
SVM | Accuracy | 300 | 91.78 | 92.14 | 91.59 | 91.19 | 87.45 | 91.29 | 92.45 |
200 | 91.84 | 92.03 | 91.51 | 91.12 | 87.85 | 91.23 | 92.50 | ||
100 | 91.71 | 92.03 | 91.93 | 90.78 | 78.87 | 91.20 | 92.42 | ||
50 | 91.88 | 91.82 | 91.62 | 88.11 | 82.70 | 91.17 | 92.24 | ||
30 | 91.68 | 91.58 | 91.87 | 88.11 | 90.29 | 91.32 | 91.88 | ||
10 | 91.12 | 91.03 | 91.20 | 88.10 | 89.84 | 91.20 | 91.29 | ||
5 | 90.87 | 90.80 | 90.84 | 88.48 | 90.90 | 90.92 | 90.80 | ||
LR | Accuracy | 300 | 92.83 | 92.11 | 92.74 | 91.19 | 91.10 | 91.04 | 92.67 |
200 | 92.60 | 92.04 | 92.56 | 91.11 | 91.07 | 91.04 | 92.63 | ||
100 | 92.54 | 92.03 | 92.50 | 90.78 | 91.07 | 91.06 | 92.46 | ||
50 | 92.35 | 91.79 | 92.32 | 88.07 | 91.04 | 90.92 | 92.29 | ||
30 | 92.03 | 91.48 | 92.05 | 88.07 | 91.04 | 91.08 | 91.91 | ||
10 | 91.39 | 91.04 | 91.40 | 88.08 | 91.08 | 90.85 | 91.30 | ||
5 | 90.90 | 90.79 | 90.89 | 88.55 | 90.93 | 90.49 | 90.84 |
Classifier | Metric | Dims | PCA | ICA | TSVD | LLE | UMAP | AE | VAE |
---|---|---|---|---|---|---|---|---|---|
K-NN | Accuracy | 300 | 91.54 | 73.53 | 91.56 | 89.89 | 90.96 | 91.11 | 88.94 |
200 | 91.62 | 77.74 | 91.63 | 90.43 | 90.80 | 91.28 | 89.62 | ||
100 | 91.56 | 81.97 | 91.56 | 90.76 | 90.86 | 91.12 | 90.38 | ||
50 | 91.66 | 83.93 | 91.71 | 90.59 | 90.78 | 91.09 | 90.91 | ||
30 | 91.57 | 85.21 | 91.72 | 90.58 | 90.82 | 91.02 | 90.51 | ||
10 | 91.23 | 89.36 | 91.24 | 90.66 | 90.99 | 90.91 | 90.57 | ||
5 | 90.54 | 90.44 | 90.55 | 90.43 | 90.77 | 90.62 | 90.86 | ||
SVM | Accuracy | 300 | 92.49 | 92.33 | 92.49 | 91.72 | 90.69 | 91.79 | 92.56 |
200 | 92.07 | 92.26 | 92.30 | 91.67 | 87.30 | 91.61 | 92.76 | ||
100 | 91.82 | 92.28 | 91.77 | 91.46 | 90.69 | 91.66 | 92.82 | ||
50 | 92.21 | 92.11 | 91.79 | 90.58 | 88.09 | 91.73 | 92.56 | ||
30 | 92.22 | 91.88 | 92.21 | 90.59 | 88.23 | 91.62 | 92.29 | ||
10 | 91.74 | 91.64 | 91.63 | 90.58 | 91.43 | 91.37 | 91.84 | ||
5 | 91.55 | 91.59 | 91.59 | 90.51 | 91.52 | 91.23 | 91.53 | ||
LR | Accuracy | 300 | 92.86 | 92.32 | 92.90 | 91.72 | 91.69 | 91.52 | 92.92 |
200 | 92.80 | 92.26 | 92.92 | 91.67 | 91.67 | 91.34 | 92.99 | ||
100 | 92.88 | 92.26 | 92.84 | 91.46 | 91.64 | 91.34 | 92.82 | ||
50 | 92.71 | 92.07 | 92.61 | 90.53 | 91.68 | 91.47 | 92.56 | ||
30 | 92.46 | 91.87 | 92.47 | 90.57 | 91.66 | 91.23 | 92.33 | ||
10 | 93.10 | 92.95 | 93.10 | 92.41 | 92.50 | 92.42 | 93.02 | ||
5 | 92.94 | 92.87 | 92.94 | 92.42 | 92.48 | 92.57 | 92.82 |
Model | Accuracy (Without Fine-Tuning) | F1 (Without Fine-Tuning) | Accuracy (With Fine-Tuning) | F1 (With Fine-Tuning)
---|---|---|---|---
K-NN | 84.49 | 70.66 | 87.75 | 75.64 |
SVM | 80.69 | 62.44 | 81.23 | 63.52 |
LR | 82.69 | 64.93 | 82.86 | 65.64 |
Classifier | Metric | Dims | PCA | ICA | TSVD | LLE | UMAP | AE | VAE |
---|---|---|---|---|---|---|---|---|---|
K-NN | Accuracy | 300 | 85.09 | 74.12 | 85.18 | 74.12 | 87.06 | 86.98 | 84.66 |
200 | 85.78 | 74.12 | 85.86 | 74.12 | 87.06 | 86.72 | 85.43 | ||
100 | 86.12 | 74.12 | 85.35 | 74.12 | 87.32 | 87.15 | 86.12 | ||
50 | 86.46 | 74.12 | 86.72 | 74.12 | 87.15 | 87.32 | 86.03 | ||
30 | 87.40 | 74.12 | 87.40 | 74.12 | 87.06 | 87.32 | 87.57 | ||
10 | 86.89 | 74.12 | 86.89 | 74.12 | 87.40 | 86.89 | 86.46 | ||
5 | 86.80 | 74.12 | 86.80 | 74.12 | 86.46 | 86.46 | 86.80 | ||
F1 | 300 | 70.31 | 0.00 | 70.53 | 0.00 | 74.54 | 83.06 | 79.90 | |
200 | 71.48 | 0.00 | 71.70 | 0.00 | 74.71 | 82.89 | 80.81 | ||
100 | 72.82 | 0.00 | 71.36 | 0.00 | 75.17 | 83.35 | 81.79 | ||
50 | 73.49 | 0.00 | 73.95 | 0.00 | 74.75 | 83.68 | 81.53 | ||
30 | 75.46 | 0.00 | 75.46 | 0.00 | 74.45 | 83.54 | 83.75 | ||
10 | 74.02 | 0.00 | 74.02 | 0.00 | 75.38 | 83.08 | 82.24 | ||
5 | 74.07 | 0.00 | 74.07 | 0.00 | 73.04 | 82.54 | 82.61 | ||
SVM | Accuracy | 300 | 83.00 | 83.29 | 83.12 | 83.38 | 76.58 | 83.86 | 82.03 |
200 | 83.32 | 82.95 | 81.81 | 83.38 | 80.35 | 85.23 | 82.23 | ||
100 | 85.18 | 82.95 | 83.80 | 81.66 | 77.98 | 85.43 | 83.98 | ||
50 | 83.55 | 82.35 | 82.92 | 80.72 | 68.24 | 85.18 | 84.26 | ||
30 | 84.23 | 82.35 | 84.43 | 79.86 | 82.46 | 83.95 | 84.83 | ||
10 | 84.92 | 81.92 | 84.92 | 74.12 | 82.23 | 84.92 | 84.98 | ||
5 | 84.12 | 80.55 | 84.38 | 74.12 | 80.38 | 83.52 | 85.72 | ||
F1 | 300 | 65.78 | 61.69 | 65.95 | 62.40 | 24.64 | 78.60 | 75.43 | |
200 | 67.88 | 60.44 | 63.16 | 62.11 | 47.75 | 80.62 | 76.02 | ||
100 | 71.61 | 58.80 | 67.83 | 56.85 | 38.71 | 80.69 | 78.49 | ||
50 | 65.74 | 56.72 | 66.52 | 54.55 | 40.54 | 80.36 | 79.09 | ||
30 | 67.20 | 56.36 | 70.29 | 49.68 | 66.98 | 78.69 | 80.06 | ||
10 | 71.01 | 54.62 | 69.43 | 0.00 | 68.95 | 80.54 | 80.49 | ||
5 | 68.44 | 48.76 | 69.04 | 0.00 | 61.07 | 77.91 | 81.49 | ||
LR | Accuracy | 300 | 83.29 | 74.12 | 82.78 | 74.12 | 84.23 | 83.98 | 83.38 |
200 | 84.66 | 74.12 | 83.98 | 74.12 | 84.23 | 83.80 | 84.23 | ||
100 | 85.69 | 74.12 | 84.06 | 74.12 | 84.23 | 84.40 | 84.75 | ||
50 | 85.43 | 74.12 | 85.18 | 74.12 | 84.66 | 83.89 | 84.75 | ||
30 | 84.75 | 74.12 | 84.83 | 74.12 | 84.15 | 83.12 | 84.92 | ||
10 | 84.75 | 74.12 | 84.75 | 74.12 | 81.92 | 84.23 | 85.52 | ||
5 | 84.15 | 74.12 | 84.15 | 74.12 | 83.12 | 82.69 | 85.43 | ||
F1 | 300 | 66.44 | 0.00 | 65.04 | 0.00 | 70.98 | 78.32 | 77.95 | |
200 | 69.40 | 0.00 | 68.03 | 0.00 | 70.89 | 78.09 | 78.99 | ||
100 | 71.84 | 0.00 | 67.82 | 0.00 | 71.16 | 79.03 | 79.77 | ||
50 | 71.28 | 0.00 | 70.92 | 0.00 | 71.99 | 78.18 | 79.58 | ||
30 | 69.93 | 0.00 | 69.95 | 0.00 | 71.05 | 77.00 | 80.04 | ||
10 | 69.83 | 0.00 | 69.83 | 0.00 | 65.80 | 79.45 | 80.98 | ||
5 | 67.71 | 0.00 | 67.71 | 0.00 | 67.55 | 76.61 | 80.93 |
Classifier | Metric | Dims | PCA | ICA | TSVD | LLE | UMAP | AE | VAE |
---|---|---|---|---|---|---|---|---|---|
K-NN | Accuracy | 300 | 83.29 | 74.12 | 82.78 | 74.12 | 84.23 | 83.98 | 83.38 |
200 | 84.66 | 74.12 | 83.98 | 74.12 | 84.23 | 83.80 | 84.23 | ||
100 | 85.69 | 74.12 | 84.06 | 74.12 | 84.23 | 84.40 | 84.75 | ||
50 | 85.43 | 74.12 | 85.18 | 74.12 | 84.66 | 83.89 | 84.75 | ||
30 | 84.75 | 74.12 | 84.83 | 74.12 | 84.15 | 83.12 | 84.92 | ||
10 | 84.75 | 74.12 | 84.75 | 74.12 | 81.92 | 84.23 | 85.52 | ||
5 | 84.15 | 74.12 | 84.15 | 74.12 | 83.12 | 82.69 | 85.43 | ||
F1 | 300 | 66.44 | 0.00 | 65.04 | 0.00 | 70.98 | 78.32 | 77.95 | |
200 | 69.40 | 0.00 | 68.03 | 0.00 | 70.89 | 78.09 | 78.99 | ||
100 | 71.84 | 0.00 | 67.82 | 0.00 | 71.16 | 79.03 | 79.77 | ||
50 | 71.28 | 0.00 | 70.92 | 0.00 | 71.99 | 78.18 | 79.58 | ||
30 | 69.93 | 0.00 | 69.95 | 0.00 | 71.05 | 77.00 | 80.04 | ||
10 | 69.83 | 0.00 | 69.83 | 0.00 | 65.80 | 79.45 | 80.98 | ||
5 | 67.71 | 0.00 | 67.71 | 0.00 | 67.55 | 76.61 | 80.93 | ||
SVM | Accuracy | 300 | 83.32 | 85.00 | 83.38 | 86.03 | 86.80 | 87.55 | 83.29 |
200 | 84.89 | 84.49 | 84.40 | 85.60 | 84.23 | 87.09 | 84.80 | ||
100 | 84.60 | 85.69 | 84.86 | 85.09 | 70.95 | 87.20 | 85.63 | ||
50 | 85.00 | 86.29 | 85.43 | 83.03 | 83.83 | 87.12 | 85.98 | ||
30 | 85.18 | 86.20 | 85.32 | 83.55 | 83.15 | 87.15 | 86.52 | ||
10 | 85.12 | 85.18 | 86.35 | 73.95 | 85.15 | 86.40 | 86.80 | ||
5 | 87.03 | 85.43 | 85.12 | 74.12 | 85.75 | 86.20 | 87.23 | ||
F1 | 300 | 64.88 | 66.02 | 66.30 | 69.42 | 73.95 | 83.76 | 77.78 | |
200 | 68.80 | 65.52 | 68.78 | 68.66 | 63.82 | 83.23 | 79.94 | ||
100 | 69.17 | 68.55 | 69.86 | 67.29 | 42.25 | 83.35 | 81.10 | ||
50 | 69.83 | 69.70 | 71.72 | 61.78 | 61.44 | 83.23 | 81.78 | ||
30 | 68.10 | 69.57 | 69.49 | 63.36 | 59.96 | 83.44 | 82.30 | ||
10 | 67.46 | 66.67 | 73.43 | 3.18 | 68.48 | 82.49 | 82.88 | ||
5 | 74.80 | 67.18 | 67.61 | 0.00 | 72.76 | 82.33 | 83.54 | ||
LR | Accuracy | 300 | 85.09 | 74.12 | 85.18 | 74.12 | 87.06 | 86.98 | 84.66 |
200 | 85.78 | 74.12 | 85.86 | 74.12 | 87.06 | 86.72 | 85.43 | ||
100 | 86.12 | 74.12 | 85.35 | 74.12 | 87.32 | 87.15 | 86.12 | ||
50 | 86.46 | 74.12 | 86.72 | 74.12 | 87.15 | 87.32 | 86.03 | ||
30 | 87.40 | 74.12 | 87.40 | 74.12 | 87.06 | 87.32 | 87.57 | ||
10 | 86.89 | 74.12 | 86.89 | 74.12 | 87.40 | 86.89 | 86.46 | ||
5 | 86.80 | 74.12 | 86.80 | 74.12 | 86.46 | 86.46 | 86.80 | ||
F1 | 300 | 70.31 | 0.00 | 70.53 | 0.00 | 74.54 | 83.06 | 79.90 | |
200 | 71.48 | 0.00 | 71.70 | 0.00 | 74.71 | 82.89 | 80.81 | ||
100 | 72.82 | 0.00 | 71.36 | 0.00 | 75.17 | 83.35 | 81.79 | ||
50 | 73.49 | 0.00 | 73.95 | 0.00 | 74.75 | 83.68 | 81.53 | ||
30 | 75.46 | 0.00 | 75.46 | 0.00 | 74.45 | 83.54 | 83.75 | ||
10 | 74.02 | 0.00 | 74.02 | 0.00 | 75.38 | 83.08 | 82.24 | ||
5 | 74.07 | 0.00 | 74.07 | 0.00 | 73.04 | 82.54 | 82.61 |
Dims | Algorithm | No Red. | PCA | ICA | TSVD | LLE | UMAP | AE | VAE |
---|---|---|---|---|---|---|---|---|---|
All dims | KNN | 0.1122 | - | - | - | - | - | - | - |
SVM | 6.5781 | - | - | - | - | - | - | - | |
LR | 0.2877 | - | - | - | - | - | - | - | |
300 | KNN | - | 0.2375 | 5.1208 | 0.2660 | 2.3393 | 14.5920 | 0.0550 | 0.0375 |
SVM | - | 0.3505 | 5.2034 | 0.4411 | 2.3741 | 14.7715 | 1.8930 | 0.6578 | |
LR | - | 0.2676 | 5.1130 | 0.3272 | 2.3298 | 14.6967 | 0.1406 | 0.1491 | |
200 | KNN | - | 0.1881 | 2.6335 | 0.1858 | 2.1614 | 12.8356 | 0.0546 | 0.0405 |
SVM | - | 0.2694 | 2.6579 | 0.2864 | 2.1789 | 13.0293 | 2.1584 | 0.2703 | |
LR | - | 0.2116 | 2.6270 | 0.2206 | 2.1571 | 12.9681 | 0.1590 | 0.0923 | |
100 | KNN | - | 0.1414 | 1.0178 | 0.1333 | 1.8668 | 9.5506 | 0.0559 | 0.0299 |
SVM | - | 0.1672 | 1.0316 | 0.1594 | 1.8751 | 9.6212 | 1.5547 | 0.0858 | |
LR | - | 0.1420 | 1.0239 | 0.1405 | 1.8616 | 9.5952 | 0.1611 | 0.0558 | |
50 | KNN | - | 0.0962 | 0.4269 | 0.0633 | 1.6540 | 10.2038 | 0.0566 | 0.0197 |
SVM | - | 0.1012 | 0.4283 | 0.0733 | 1.6520 | 10.2410 | 1.9537 | 0.0278 | |
LR | - | 0.0939 | 0.4234 | 0.0645 | 1.6504 | 10.2667 | 0.1876 | 0.0170 | |
30 | KNN | - | 0.1045 | 0.3777 | 0.0558 | 2.0877 | 10.2379 | 0.0586 | 0.0451 |
SVM | - | 0.1012 | 0.3737 | 0.0560 | 2.0834 | 10.2539 | 1.5280 | 0.0184 | |
LR | - | 0.1132 | 0.3728 | 0.0578 | 2.0828 | 10.2533 | 0.1632 | 0.0178 | |
10 | KNN | - | 0.1315 | 0.3699 | 0.0880 | 2.6387 | 7.8423 | 0.0678 | 0.0980 |
SVM | - | 0.0542 | 0.2864 | 0.0343 | 2.5491 | 7.7853 | 1.5084 | 0.0074 | |
LR | - | 0.0582 | 0.2881 | 0.0363 | 2.5526 | 7.8052 | 0.2418 | 0.0170 | |
5 | KNN | - | 0.1498 | 0.3740 | 0.0993 | 1.2698 | 19.1309 | 0.0653 | 0.0573 |
SVM | - | 0.0660 | 0.3041 | 0.0357 | 1.2204 | 19.0851 | 1.2834 | 0.0064 | |
LR | - | 0.7534 | 0.3064 | 0.0400 | 1.2235 | 19.0914 | 0.1730 | 0.0219 |
Dims | Algorithm | No Red. | PCA | ICA | TSVD | LLE | UMAP | AE | VAE |
---|---|---|---|---|---|---|---|---|---|
All dims | KNN | 0.000131 | - | - | - | - | - | - | -
SVM | 0.007633 | - | - | - | - | - | - | - | |
LR | 0.000335 | - | - | - | - | - | - | - | |
300 | KNN | - | 0.000918 | 0.012234 | 0.001150 | 0.005690 | 0.034384 | 0.000065 | 0.000044 |
SVM | - | 0.001122 | 0.012425 | 0.001435 | 0.005799 | 0.034654 | 0.002197 | 0.000510 | |
LR | - | 0.001225 | 0.012513 | 0.001594 | 0.005864 | 0.034833 | 0.000164 | 0.000174 | |
200 | KNN | - | 0.000764 | 0.006354 | 0.000875 | 0.005244 | 0.030273 | 0.000064 | 0.000048 |
SVM | - | 0.000924 | 0.006448 | 0.001081 | 0.005329 | 0.030552 | 0.002505 | 0.000315 | |
LR | - | 0.001015 | 0.006505 | 0.001209 | 0.005392 | 0.030763 | 0.000185 | 0.000108 | |
100 | KNN | - | 0.000556 | 0.002664 | 0.000547 | 0.004522 | 0.022547 | 0.000066 | 0.000035 |
SVM | - | 0.000653 | 0.002767 | 0.000641 | 0.004587 | 0.022686 | 0.001805 | 0.000100 | |
LR | - | 0.000718 | 0.002849 | 0.000711 | 0.004643 | 0.022805 | 0.000188 | 0.000066 | |
50 | KNN | - | 0.000410 | 0.001180 | 0.000337 | 0.004026 | 0.024046 | 0.000066 | 0.000024 |
SVM | - | 0.000481 | 0.001239 | 0.000400 | 0.004075 | 0.024151 | 0.002268 | 0.000033 | |
LR | - | 0.000542 | 0.001292 | 0.000451 | 0.004126 | 0.024291 | 0.001970 | 0.000020 | |
30 | KNN | - | 0.000412 | 0.001045 | 0.000358 | 0.005058 | 0.023976 | 0.000069 | 0.000013 |
SVM | - | 0.000465 | 0.001095 | 0.000428 | 0.005118 | 0.024050 | 0.001774 | 0.000016 | |
LR | - | 0.000526 | 0.001140 | 0.000497 | 0.005180 | 0.024121 | 0.000190 | 0.000022 | |
10 | KNN | - | 0.000467 | 0.001001 | 0.000364 | 0.006436 | 0.018336 | 0.000079 | 0.000115 |
SVM | - | 0.000505 | 0.001035 | 0.000403 | 0.006504 | 0.018370 | 0.001751 | 0.000009 | |
LR | - | 0.000551 | 0.001071 | 0.000442 | 0.006595 | 0.018427 | 0.000281 | 0.000016 | |
5 | KNN | - | 0.000579 | 0.001023 | 99.345446 | 0.003158 | 0.044515 | 0.000077 | 0.000067 |
SVM | - | 0.000622 | 0.001072 | 35.697301 | 0.003205 | 0.044549 | 0.001490 | 0.000004 | |
LR | - | 0.000668 | 0.001115 | 40.028334 | 0.003255 | 0.044590 | 0.000202 | 0.000010 |
Dims | Algorithm | No Reduction | PCA | ICA | TSVD | LLE | UMAP | AE | VAE |
---|---|---|---|---|---|---|---|---|---|
All dims | KNN | 8.1208 | - | - | - | - | - | - | - |
SVM | 6.6779 | - | - | - | - | - | - | - | |
LR | 1.7711 | - | - | - | - | - | - | - | |
300 | KNN | - | 3.3185 | 18.0089 | 3.9497 | 640.9010 | 58.9425 | 5.8711 | 2.8873 |
SVM | - | 1.1295 | 15.5802 | 2.0608 | 638.3636 | 57.7811 | 3.2424 | 4.9062 | |
LR | - | 0.6804 | 15.2336 | 1.6244 | 638.1153 | 57.0314 | 0.8763 | 0.5770 | |
200 | KNN | - | 2.5147 | 10.6664 | 2.6874 | 723.6974 | 45.4347 | 5.8749 | 1.8997 |
SVM | - | 0.7724 | 8.9030 | 1.3882 | 721.6401 | 44.7845 | 3.8942 | 1.6256 | |
LR | - | 0.4714 | 8.6735 | 1.0941 | 721.4694 | 43.8987 | 0.7567 | 0.3423 | |
100 | KNN | - | 1.7117 | 5.0284 | 2.0065 | 766.1196 | 30.7187 | 6.0608 | 1.4062 |
SVM | - | 0.5423 | 4.0499 | 0.8980 | 765.1256 | 30.1565 | 3.5009 | 0.4789 | |
LR | - | 0.3981 | 3.9087 | 0.7337 | 765.0445 | 29.7182 | 0.9380 | 0.1253 | |
50 | KNN | - | 1.3405 | 3.7035 | 1.1829 | 706.0574 | 25.0885 | 6.2141 | 0.8375 |
SVM | - | 0.4213 | 2.9186 | 0.5163 | 705.2746 | 24.6350 | 4.3489 | 0.1271 | |
LR | - | 0.3264 | 2.8494 | 0.4419 | 705.1999 | 24.8079 | 0.9799 | 0.0542 | |
30 | KNN | - | 1.3077 | 4.9349 | 1.0649 | 715.8028 | 23.1674 | 6.7874 | 0.7683 |
SVM | - | 0.3767 | 4.0024 | 0.4025 | 715.1210 | 22.6796 | 4.7007 | 0.0752 | |
LR | - | 0.3046 | 3.9717 | 0.3659 | 715.0768 | 22.4977 | 1.0499 | 0.0348 | |
10 | KNN | - | 3.9415 | 9.0696 | 3.6465 | 797.4667 | 19.4872 | 6.0613 | 5.3522 |
SVM | - | 0.1942 | 2.8076 | 0.2275 | 796.2334 | 18.2937 | 4.2129 | 0.0284 | |
LR | - | 0.1916 | 2.8012 | 0.2246 | 796.2219 | 18.3165 | 0.9183 | 0.0243 | |
5 | KNN | - | 1.2712 | 3.5514 | 1.4474 | 288.9107 | 46.4812 | 6.7752 | 1.3948 |
SVM | - | 0.2751 | 2.2860 | 0.2257 | 288.0114 | 45.0791 | 10.0579 | 0.0206 | |
LR | - | 0.2811 | 2.2815 | 0.2245 | 288.0086 | 45.0876 | 1.6542 | 0.0198 |
Dims | Algorithm | No Reduction | PCA | ICA | TSVD | LLE | UMAP | AE | VAE |
---|---|---|---|---|---|---|---|---|---|
All dims | KNN | 0.042906 | - | - | - | - | - | - | - |
SVM | 0.043543 | - | - | - | - | - | - | - | |
LR | 0.043796 | - | - | - | - | - | - | - | |
300 | KNN | - | 0.009368 | 0.042906 | 0.010773 | 1.487585 | 1.487585 | 0.140200 | 0.002241 |
SVM | - | 0.010381 | 0.043543 | 0.011774 | 1.488009 | 1.488009 | 0.142522 | 0.002522 | |
LR | - | 0.010908 | 0.043796 | 0.012279 | 1.488143 | 1.488143 | 0.143989 | 0.000350 | |
200 | KNN | - | 0.006926 | 0.025502 | 0.007325 | 1.679929 | 1.679929 | 0.107997 | 0.004590 |
SVM | - | 0.007585 | 0.025972 | 0.007973 | 1.680336 | 1.680336 | 0.109658 | 0.003002 | |
LR | - | 0.007901 | 0.026177 | 0.008291 | 1.680484 | 1.680484 | 0.110326 | 0.000282 | |
100 | KNN | - | 0.004711 | 0.012305 | 0.005255 | 1.778402 | 1.778402 | 0.073428 | 0.007032 |
SVM | - | 0.005074 | 0.012605 | 0.005653 | 1.778592 | 1.778592 | 0.074597 | 0.004063 | |
LR | - | 0.005261 | 0.012735 | 0.005857 | 1.778694 | 1.778694 | 0.075251 | 0.001089 | |
50 | KNN | - | 0.003778 | 0.009112 | 0.003362 | 1.639152 | 1.639152 | 0.059961 | 0.007211 |
SVM | - | 0.004040 | 0.009313 | 0.003608 | 1.639340 | 1.639340 | 0.060681 | 0.005047 | |
LR | - | 0.004188 | 0.009435 | 0.003763 | 1.639449 | 1.639449 | 0.061619 | 0.001138 | |
30 | KNN | - | 0.003809 | 0.012082 | 0.002975 | 1.661700 | 1.661700 | 0.054487 | 0.007876 |
SVM | - | 0.004047 | 0.012243 | 0.003162 | 1.661868 | 1.661868 | 0.054872 | 0.005455 | |
LR | - | 0.004203 | 0.012375 | 0.003301 | 1.661981 | 1.661981 | 0.055044 | 0.001219 | |
10 | KNN | - | 0.009230 | 0.021362 | 0.008341 | 1.851171 | 1.851171 | 0.045616 | 0.007034 |
SVM | - | 0.009327 | 0.021463 | 0.008436 | 1.851276 | 1.851276 | 0.045723 | 0.004889 | |
LR | - | 0.009421 | 0.021573 | 0.008530 | 1.851384 | 1.851384 | 0.045858 | 0.001066 | |
5 | KNN | - | 0.003302 | 0.008536 | 0.003812 | 0.670890 | 0.670890 | 0.108215 | 0.007861 |
SVM | - | 0.003380 | 0.008628 | 0.003906 | 0.670975 | 0.670975 | 0.108334 | 0.011671 | |
LR | - | 0.003463 | 0.008708 | 0.004001 | 0.671065 | 0.671065 | 0.108464 | 0.001920 |