Search Results (5)

Search Parameters:
Keywords = Tsetlin Machines

13 pages, 548 KiB  
Article
A Novel Tsetlin Machine with Enhanced Generalization
by Usman Anjum and Justin Zhan
Electronics 2024, 13(19), 3825; https://doi.org/10.3390/electronics13193825 - 27 Sep 2024
Cited by 2 | Viewed by 1965
Abstract
The Tsetlin Machine (TM) is a novel machine learning approach that implements propositional logic to perform various tasks such as classification and regression. The TM not only achieves competitive accuracy in these tasks but also provides results that are explainable and easy to implement using simple hardware. The TM learns using clauses based on the features of the data, and final classification is done using a combination of these clauses. In this paper, we propose the novel idea of adding regularizers to the TM, referred to as the Regularized TM (RegTM), to improve generalization. Regularizers have been widely used in machine learning to enhance accuracy. We explore different regularization strategies and their influence on performance. We demonstrate the feasibility of our methodology through various experiments on benchmark datasets.
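To make the clause mechanism concrete, here is a minimal sketch of TM-style clause voting together with a hypothetical regularization term. The abstract does not specify the regularizers used, so the sparsity penalty below (`lam`, `regularized_loss`) is an illustrative assumption, not the RegTM objective itself.

```python
import numpy as np

def clause_output(x, include_pos, include_neg):
    # A clause fires only if every included positive literal is 1
    # and every included negated literal is 0. x is a Boolean vector;
    # include_pos / include_neg are Boolean masks over the features.
    return int(np.all(x[include_pos]) and np.all(~x[include_neg]))

def tm_score(x, clauses, polarities):
    # Final classification combines the clauses: positive-polarity
    # clauses vote for the class, negative-polarity clauses against.
    return sum(p * clause_output(x, ip, ineg)
               for (ip, ineg), p in zip(clauses, polarities))

def regularized_loss(train_errors, clauses, lam=0.01):
    # Hypothetical RegTM-style objective (an assumption): empirical
    # error plus a penalty on the number of included literals.
    n_literals = sum(int(ip.sum() + ineg.sum()) for ip, ineg in clauses)
    return train_errors + lam * n_literals

x = np.array([1, 0, 1], dtype=bool)
clauses = [(np.array([True, False, False]),   # include feature 0 positively
            np.array([False, True, False]))]  # include feature 1 negated
polarities = [1]
print(tm_score(x, clauses, polarities))       # -> 1 (the clause matches x)
```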

18 pages, 958 KiB  
Article
Tsetlin Machine for Sentiment Analysis and Spam Review Detection in Chinese
by Xuanyu Zhang, Hao Zhou, Ke Yu, Xiaofei Wu and Anis Yazidi
Algorithms 2023, 16(2), 93; https://doi.org/10.3390/a16020093 - 8 Feb 2023
Cited by 2 | Viewed by 3082
Abstract
In Natural Language Processing (NLP), deep-learning neural networks achieve superior performance but pose transparency and explainability barriers due to their black-box nature, and thus lack trustworthiness. On the other hand, classical machine learning techniques are intuitive and easy to understand but often cannot perform satisfactorily. Fortunately, many recent studies have indicated that the newly introduced Tsetlin Machine (TM) model delivers reliable performance while enjoying human-level interpretability by nature, making it a promising approach for trading off effectiveness and interpretability. However, nearly all related work so far has concentrated on the English language, while research on other languages is relatively scarce. We therefore propose a novel TM-based method, whose learning process is transparent and easily understandable, for Chinese NLP tasks. Our model can learn semantic information in the Chinese language through clauses. For evaluation, we conducted experiments in two domains, namely sentiment analysis and spam review detection. The experimental results showed that, for both domains, our method provides higher accuracy and a higher F1 score than complex but non-transparent deep-learning models such as BERT and ERNIE.
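As a rough illustration of how clause learning applies to Chinese text, the sketch below Booleanizes tokenized reviews into per-token presence features, the standard input format for a TM. The tokenization and the tiny example data are assumptions made for illustration; the paper's actual preprocessing may differ.

```python
import numpy as np

def booleanize(docs, vocab):
    # Each vocabulary token (character or word) becomes one Boolean
    # feature marking its presence in the document.
    index = {tok: i for i, tok in enumerate(vocab)}
    X = np.zeros((len(docs), len(vocab)), dtype=np.uint8)
    for row, doc in enumerate(docs):
        for tok in doc:
            if tok in index:
                X[row, index[tok]] = 1
    return X

docs = [["质量", "很", "好"], ["太", "差", "了"]]   # tokenized reviews (illustrative)
labels = np.array([1, 0])                          # 1 = positive, 0 = negative
vocab = sorted({tok for doc in docs for tok in doc})
X = booleanize(docs, vocab)                        # shape (2, 6), input to a TM
```

A multiclass TM would then learn conjunctive clauses over these presence features, which is what makes the learned sentiment rules directly readable.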

13 pages, 808 KiB  
Article
Enhancing Attention’s Explanation Using Interpretable Tsetlin Machine
by Rohan Kumar Yadav and Dragoş Constantin Nicolae
Algorithms 2022, 15(5), 143; https://doi.org/10.3390/a15050143 - 22 Apr 2022
Cited by 2 | Viewed by 3109
Abstract
Explainability is one of the key factors in Natural Language Processing (NLP), especially for legal documents, medical diagnosis, and clinical text. The attention mechanism has recently been a popular route to such explainability, estimating the relative importance of input units. Recent research has revealed, however, that such processes tend to misidentify irrelevant input units in their explanations. This is because the language representation layers are initialized with pre-trained word embeddings that are not context-dependent. Such a lack of context-dependent knowledge in the initial layer makes it difficult for the model to concentrate on the important aspects of the input. Usually this does not impact the performance of the model, but the explainability diverges from human understanding. Hence, in this paper, we propose an ensemble method that embeds logic-based information from the Tsetlin Machine into the initial representation layer of the neural network to enhance the model's explainability. We obtain a global clause score for each word in the vocabulary and feed it into the neural network layer as context-dependent information. Our experiments show that the ensemble method enhances the explainability of the attention layer without sacrificing model performance, even outperforming the baseline on some datasets.
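The following sketch shows one way a global clause score per vocabulary word could be computed from a trained TM's clauses and appended to pre-trained embeddings. The aggregation rule here is an assumption for illustration, not the paper's exact definition.

```python
import numpy as np

def global_clause_scores(include_pos, polarities):
    # include_pos: (n_clauses, vocab_size) Boolean matrix marking which
    # word literals each clause includes; polarities: +1/-1 per clause.
    # Score = net clause vote each word participates in (assumed rule).
    return (include_pos.astype(int) * np.asarray(polarities)[:, None]).sum(axis=0)

def augment_embeddings(E, scores):
    # Concatenate the per-word clause score to the pre-trained embedding
    # matrix E (vocab_size, dim) as context-dependent information.
    return np.concatenate([E, scores[:, None]], axis=1)

include_pos = np.array([[1, 0, 1], [0, 1, 1]], dtype=bool)
scores = global_clause_scores(include_pos, [+1, -1])   # -> [ 1 -1  0]
E = np.random.default_rng(0).normal(size=(3, 4))       # stand-in embeddings
E_aug = augment_embeddings(E, scores)                  # shape (3, 5)
```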

25 pages, 611 KiB  
Article
Adaptive Sparse Representation of Continuous Input for Tsetlin Machines Based on Stochastic Searching on the Line
by Kuruge Darshana Abeyrathna, Ole-Christoffer Granmo and Morten Goodwin
Electronics 2021, 10(17), 2107; https://doi.org/10.3390/electronics10172107 - 30 Aug 2021
Cited by 4 | Viewed by 3222
Abstract
This paper introduces a novel approach to representing continuous inputs in Tsetlin Machines (TMs). Instead of using one Tsetlin Automaton (TA) for every unique threshold found when Booleanizing continuous input, we employ two Stochastic Searching on the Line (SSL) automata to learn discriminative lower and upper bounds. The two resulting Boolean features are adapted to the rest of the clause by equipping each clause with its own team of SSLs, which update the bounds during the learning process. Two standard TAs finally decide whether to include the resulting features as part of the clause. In this way, only four automata altogether represent one continuous feature (instead of potentially hundreds of them). We evaluate the performance of the new scheme empirically using five datasets, along with a study of interpretability. On average, TMs with SSL feature representation use 4.3 times fewer literals than the TM with static threshold-based features. Furthermore, in terms of average memory usage and F1-Score, our approach outperforms simple Multi-Layered Artificial Neural Networks, Decision Trees, Support Vector Machines, K-Nearest Neighbor, Random Forest, Gradient Boosted Trees (XGBoost), and Explainable Boosting Machines (EBMs), as well as the standard and real-value weighted TMs. Our approach further outperforms Neural Additive Models on Fraud Detection and StructureBoost on CA-58 in terms of the Area Under Curve while performing competitively on COMPAS.
(This article belongs to the Section Artificial Intelligence)
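A minimal sketch of the SSL idea follows: two Stochastic Searching on the Line automata learn a lower and an upper bound for one continuous feature, and the derived Boolean feature is whether the value falls inside that interval. The step rule below is a simplified SSL update and an assumption about details the abstract leaves open (the two standard TAs that gate inclusion are omitted).

```python
import random

class SSL:
    # Stochastic Searching on the Line: a discretized position on
    # [lo, hi] that is nudged stepwise toward the feedback it receives.
    def __init__(self, n_states=100, lo=0.0, hi=1.0):
        self.n, self.state = n_states, n_states // 2
        self.lo, self.hi = lo, hi

    def value(self):
        return self.lo + (self.hi - self.lo) * self.state / self.n

    def update(self, direction, p=0.9):
        # Move one step toward `direction` (+1 or -1) with probability
        # p, otherwise one step away -- the stochastic search.
        step = direction if random.random() < p else -direction
        self.state = min(self.n, max(0, self.state + step))

# One continuous feature needs only these two automata per clause
# (plus two standard TAs, omitted here) rather than one TA per threshold.
lower, upper = SSL(), SSL()

def boolean_feature(x):
    # The clause sees a single Boolean: is x inside the learned bounds?
    return lower.value() <= x <= upper.value()
```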

24 pages, 2009 KiB  
Article
Low-Power Audio Keyword Spotting Using Tsetlin Machines
by Jie Lei, Tousif Rahman, Rishad Shafik, Adrian Wheeldon, Alex Yakovlev, Ole-Christoffer Granmo, Fahim Kawsar and Akhil Mathur
J. Low Power Electron. Appl. 2021, 11(2), 18; https://doi.org/10.3390/jlpea11020018 - 9 Apr 2021
Cited by 38 | Viewed by 7476
Abstract
The emergence of artificial intelligence (AI)-driven keyword spotting (KWS) technologies has revolutionized human-to-machine interaction. Yet the challenges of end-to-end energy efficiency, memory footprint, and system complexity in current neural network (NN)-powered AI-KWS pipelines have remained ever present. This paper evaluates KWS using a learning-automata-powered machine learning algorithm called the Tsetlin Machine (TM). Through a significant reduction in parameter requirements, and by choosing logic- over arithmetic-based processing, the TM offers new opportunities for low-power KWS while maintaining high learning efficacy. We explore a TM-based KWS pipeline to demonstrate low complexity and a faster rate of convergence compared to NNs. Further, we investigate scalability with an increasing number of keywords and explore the potential for enabling low-power on-chip KWS.
(This article belongs to the Special Issue Artificial Intelligence of Things (AIoT))
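As a rough sketch of such a pipeline, the code below Booleanizes MFCC feature vectors by thresholding each coefficient against training-set quantiles, producing the Boolean input a multiclass TM consumes. The quantile scheme and the commented pyTsetlinMachine call are assumptions for illustration; the paper's feature extraction and TM configuration may differ.

```python
import numpy as np

def booleanize_mfcc(mfcc, thresholds):
    # mfcc: (n_samples, n_coeffs); thresholds: (n_coeffs, n_thresholds).
    # Each (coefficient, threshold) pair becomes one Boolean feature:
    # 1 if the coefficient exceeds that threshold.
    return (mfcc[:, :, None] > thresholds[None, :, :]).reshape(len(mfcc), -1).astype(np.uint8)

rng = np.random.default_rng(0)
train_mfcc = rng.normal(size=(100, 13))        # stand-in for real MFCC vectors
thresholds = np.quantile(train_mfcc, [0.25, 0.5, 0.75], axis=0).T   # (13, 3)
X_train = booleanize_mfcc(train_mfcc, thresholds)                   # (100, 39)

# A multiclass TM would then be trained on X_train, e.g. (assumed API):
# from pyTsetlinMachine.tm import MultiClassTsetlinMachine
# tm = MultiClassTsetlinMachine(number_of_clauses=2000, T=80, s=27.0)
# tm.fit(X_train, y_train, epochs=60)
```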
