Tsetlin Machines

The learning engine at the heart of Logic-Based Networks. Tsetlin automata underpin the deep learning capabilities that make LBNs fast, efficient, and explainable. They represent a fundamentally different path to artificial intelligence.

An engine for deep learning

Logic learned, not calculated

Different intelligence demands different machinery. Where neural networks learn through vast numerical computation — adjusting millions of floating-point weights across layers of matrix multiplication — Tsetlin Machines learn through logic. They construct rule-based propositions from data, arriving at decisions that are transparent, deterministic, and remarkably efficient.

At the core of every Logic-Based Network trained on Literal Labs' platform sits a Tsetlin Machine. It is the learning architecture that gives LBNs their defining characteristics: inference speeds over 50× faster than neural networks, energy consumption reduced by similar orders of magnitude, and outputs you can trace, interrogate, and trust. Tsetlin automata — the atomic units of this architecture — configure logical connexions between input data and classification decisions, replacing opaque numerical optimisation with structured, interpretable logic.

The result is AI that does not merely perform. It explains.

Tsetlin Machine performance

Inference speed
54× faster
Than neural networks
Energy efficiency
52× less energy
Measured per output
Accuracy
Within ±2%
Of comparable neural network models
Explainability
Logically explainable
Transparent, traceable, deterministic

These are not projections. They are benchmarked results from Tsetlin Machine models trained on Literal Labs' platform, evaluated on open-source datasets, and run on standard CPUs and microcontrollers.

Explore the benchmarks →

The breakthrough

Next Generation AI

Patterns learned logically. The breakthrough algorithm combining Tsetlin automata with propositional logic was originally published in 2018 by Ole-Christoffer Granmo, chair of Literal Labs' Technical Steering Committee and a professor at Norway's University of Agder. Its operation was first demonstrated in image recognition, constructing logical propositions, known as clauses, from literals whose connexions to each clause are configured by Tsetlin automata.

The combination is potent. Tsetlin automata and propositional logic together yield a computational model for machine learning that is both energy-efficient and high-performing. Complex system behaviour is modelled through teams or collectives of automata, reaching optimal decisions with greater reliability and redundancy whilst operating on principles of statistical optimality.

Training a Tsetlin Machine is an exercise in disciplined evolution. Each automaton progresses through a linear sequence of states, where each state represents the automaton's confidence in performing its action. Actions are associated with two subsets of states: one for switching a connexion between an input literal and a clause ON, the other for switching it OFF. Simple transitions between states reward or penalise the automaton's actions, playing a role akin to that of weights in neural networks. The distinction is decisive: unlike multiplication-heavy neural network weights, Tsetlin automata "weights" are simple logic signals that configure input literals within clauses.
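The state mechanics described above can be sketched in a few lines of Python. This is a minimal illustration, not code from any Tsetlin Machine library; the class name, state layout, and parameters are assumptions made for clarity.

```python
# A minimal sketch of a single two-action Tsetlin automaton.
# Its entire memory is one integer in 1..2*N: states 1..N select the
# action "exclude" (connexion OFF), states N+1..2*N select "include"
# (connexion ON). Rewards push the state deeper into the current
# action's half; penalties push it towards, and eventually across,
# the boundary between the two actions.

class TsetlinAutomaton:
    def __init__(self, n_states_per_action=100):
        self.n = n_states_per_action
        self.state = self.n  # start at the boundary, on the "exclude" side

    def action(self):
        return "include" if self.state > self.n else "exclude"

    def reward(self):
        # Reinforce the current action: move away from the boundary.
        if self.action() == "include":
            self.state = min(self.state + 1, 2 * self.n)
        else:
            self.state = max(self.state - 1, 1)

    def penalize(self):
        # Weaken the current action: move towards the boundary.
        if self.action() == "include":
            self.state -= 1
        else:
            self.state += 1

ta = TsetlinAutomaton(n_states_per_action=3)
assert ta.action() == "exclude"
ta.penalize()                    # state 3 -> 4: crosses the boundary
assert ta.action() == "include"
```

Note that learning here is nothing but increment and decrement on an integer, which is why no multiplication or floating-point arithmetic is needed.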

Building on the breakthrough

Literal Labs co-founders Alex Yakovlev and Rishad Shafik, together with their team at Newcastle University, have extended Tsetlin Machine capabilities through new data representation techniques — including booleanisation and binarisation — alongside parallelisation and compression methods based on indexing input literals, tiled architectures, and hardware-software codesign of machine learning systems.

These are not marginal refinements. The combinations amplify Tsetlin Machine advantages by orders of magnitude: up to 1,000× faster inferencing and orders of magnitude greater energy savings compared with neural networks. The team has also brought a new level of understanding to the dynamics of machine learning, visualising the learning processes and identifying analytical characteristics of Tsetlin Machine hyperparameters — such as thresholds on clause voting and feedback activation — that illuminate what was previously opaque.

A diagram illustrating how the feedback system works in a Tsetlin Machine. Given an observation (training data), the Tsetlin Machine decides whether a literal needs to be memorised or forgotten within the resulting model. From tsetlinmachine.org
Background

History of the Tsetlin Machine

Every technology has its origin story. The Tsetlin Machine's begins in the Soviet Union, over sixty years ago.

Mikhail Tsetlin was a visionary Soviet mathematician who, in the 1960s, explored a radically different path to artificial intelligence. Rather than mimicking biological neurons, Tsetlin's approach was rooted in learning automata and game theory. He recognised that logic — expressed through what we now call Tsetlin automata — could classify data more efficiently than the numerical methods pursued elsewhere, forging a new direction in AI research.

Together with Victor Varshavsky — advisor to Literal Labs' co-founder Alex Yakovlev — Tsetlin developed theories, algorithms, and applications that solved problems across fields from engineering to economics, sociology and medicine. Despite his early death in 1966 at just forty-two, the research first spurred by Tsetlin branched into various fields, including control systems and circuit design. But the AI element of his approach — the part that mattered most — lay dormant for decades. Until now.

Mikhail Tsetlin photographed with Victor Varshavsky in 1961
Mikhail Tsetlin (1924-1966), the creator of learning automata theory, with Victor Varshavsky (1933-2005) in 1961, planning the school-seminar on automata. Varshavsky was Alex Yakovlev's advisor.
L Zadeh, J McCarthy, V I Varshavsky, and D A Pospelov photographed in Leningrad in 1977
The International Workshop on Artificial Intelligence in Repino, near Leningrad (April 18-24, 1977). From right to left: L. Zadeh, participating in a discussion; J. McCarthy, the computer scientist known as the father of AI and a Turing Award winner; V. I. Varshavsky, the Soviet classic in the field of collective behaviour of automata; and D. A. Pospelov, the founder of AI in the Soviet Union.
V I Varshavsky seminar in the USSR in the 1980s featuring Literal Labs co-founder Alex Yakovlev
During one of Varshavsky's seminars in Leningrad in the 1980s (Varshavsky standing, Alex Yakovlev sitting in front of him).

How Tsetlin Machines work

Tsetlin Machines are a family of machine learning algorithms, and associated computational architectures, that use the principles of learning automata — called Tsetlin automata — and game theory to create logic propositions for classifying data obtained from the environment surrounding the machine. The automata configure connexions between input literals, which represent the features within input data, and the propositions used to produce classification decisions. Depending on whether those decisions prove correct or erroneous, feedback logic issues rewards and penalties to the automata.
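The clause-and-vote mechanism described above can be sketched in Python. This is an illustrative toy, not a trained model or library code: the clause index lists and the XOR example are hand-written assumptions chosen to show how literals, clauses, and voting fit together.

```python
# A hedged sketch of Tsetlin Machine inference. Each clause is a
# conjunction (AND) over a subset of literals, where the literals are
# the Boolean input features and their negations. Positive-polarity
# clauses vote for the class, negative-polarity clauses vote against
# it; the sign of the vote sum gives the decision.

def literals(x):
    # Input features followed by their negations: [x1, x2, ~x1, ~x2]
    return x + [1 - v for v in x]

def clause_output(included, lits):
    # The clause fires only if every included literal is 1.
    return int(all(lits[i] for i in included))

def classify(x, pos_clauses, neg_clauses):
    lits = literals(x)
    votes = sum(clause_output(c, lits) for c in pos_clauses) \
          - sum(clause_output(c, lits) for c in neg_clauses)
    return int(votes >= 0)

# Hand-written clauses for XOR; literal indices 0..3 = [x1, x2, ~x1, ~x2].
pos = [[0, 3], [1, 2]]   # x1 AND ~x2, x2 AND ~x1  -> vote for class 1
neg = [[0, 1], [2, 3]]   # x1 AND x2, ~x1 AND ~x2  -> vote against
assert classify([0, 1], pos, neg) == 1
assert classify([1, 1], pos, neg) == 0
```

In a real Tsetlin Machine the `pos` and `neg` index lists are not written by hand; they are what the Tsetlin automata learn via rewards and penalties during training.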

A simple automaton as detailed by Mikhail Tsetlin in his book 'Automaton Theory and Modeling of Biological Systems'
A diagram illustrating the feedback system of a Tsetlin Machine from Mikhail Tsetlin's book Automaton Theory and Modeling of Biological Systems, Volume 102. Given an observation (training data), the Tsetlin Machine decides whether a literal needs to be memorised or forgotten within the resulting model.
Literal Labs logo mark

Train Tsetlin Machines

Theory is persuasive. Results are better. Literal Labs' training platform puts Tsetlin Machine technology directly into your hands and your devices.

Request early access

Benchmarking Tsetlin Machines

Literal Labs' performance

Curious how Tsetlin Machines compare against other AI technologies, including neural networks? Explore the benchmarks yourself — or delve into the published research papers behind the numbers.

Frequently asked questions

How do Tsetlin Machines compare to decision trees and similar? chevron

Tsetlin Machines deliver competitive or superior accuracy to decision trees, random forests, SVMs, and logistic regression across standard benchmarks, whilst producing human-readable propositional logic rules that none of these methods natively offer at comparable accuracy. In the original 2018 paper, Tsetlin Machines matched or exceeded all five methods on tasks including digit recognition, board game planning, and the Iris dataset. The Integer-Weighted Tsetlin Machine variant has since outperformed decision trees, SVMs, random forests, XGBoost, and Explainable Boosting Machines on both average memory usage and F1-score in direct comparisons.

Can Tsetlin Machines be used for NLP and text? chevron

Yes they can. Tsetlin Machines have been successfully applied to sentiment analysis, fake news detection, word-sense disambiguation, keyword spotting, and medical text classification. On Chinese sentiment analysis tasks, Tsetlin Machine models have outperformed BERT, and an explainable fake news detection framework achieved higher F1-scores than both BERT and XLNet on the PolitiFact and GossipCop datasets. A dedicated Relational Tsetlin Machine variant uses first-order logic for natural language understanding, producing knowledge bases that are 10× more compact than alternatives while improving QA accuracy from 94.83% to 99.48%.

What is a convolutional Tsetlin Machine? chevron

A convolutional Tsetlin Machine (CTM) applies each clause as a convolution filter over image patches, functioning as an interpretable alternative to convolutional neural networks (CNNs). Instead of learning floating-point filter weights, a CTM learns propositional logic patterns (conjunctive clauses of Boolean literals) that can be visualised as human-readable images showing exactly what each filter has learned. On the MNIST handwritten digit benchmark, the convolutional Tsetlin Machine achieves 99.51% test accuracy, and learned filters clearly show recognisable digit shapes when visualised.

Are Tsetlin Machine predictions interpretable and explainable? chevron

Tsetlin Machines are inherently interpretable because every learned pattern is a conjunctive clause in propositional logic — a simple AND-rule over input features and their negations that a human can read directly. Unlike neural networks, which require post-hoc explanation methods like SHAP or LIME, Tsetlin Machine clauses reveal exactly which features caused a classification decision. This built-in explainability makes Tsetlin Machines especially valuable in safety-critical and regulated domains such as healthcare, where researchers have used them for interpretable breast cancer detection and medical text categorisation.
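A toy sketch of this readability, with made-up feature names and a hand-written clause rather than a trained model:

```python
# Illustrative only: a learned clause is just a list of literal indices,
# so it can be printed directly as a human-readable AND-rule.

features = ["fever", "cough", "rash"]   # made-up feature names

def clause_to_rule(included):
    # Literal index i < len(features) is the feature itself;
    # i >= len(features) is that feature's negation.
    n = len(features)
    terms = [features[i] if i < n else f"NOT {features[i - n]}"
             for i in included]
    return " AND ".join(terms)

print(clause_to_rule([0, 1, 5]))   # fever AND cough AND NOT rash
```

No post-hoc attribution method is involved: the rule above is the model's decision logic itself, not an approximation of it.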

What is a Tsetlin automaton? chevron

A Tsetlin automaton is a fixed-structure finite-state machine. First introduced by Mikhail Tsetlin in 1961, it solves the multi-armed bandit problem by maintaining a single integer as its entire memory, learning optimal actions purely through increment and decrement operations in response to rewards and penalties. Unlike variable-structure learning automata, which must maintain and update complex action probability vectors, a Tsetlin automaton is deterministic in structure and computationally minimal. It was the first learning automaton ever proposed, and it converges to the optimal action arbitrarily closely as its number of states increases.

Can Tsetlin Machines be implemented on FPGAs and custom hardware? chevron

Tsetlin Machines are exceptionally well-suited to hardware implementation because inference requires only bitwise AND, OR, and NOT operations. Like Logic-Based Networks, they contain no multiplication or floating-point arithmetic. FPGA implementations have achieved 3.3 million inferences per second at just 20 mW of additional power, and the first Tsetlin Machine ASIC in 65nm CMOS classifies MNIST digits at 60,300 frames per second while consuming only 8.6 nanojoules per classification. The MATADOR framework from Newcastle University automates the generation of Tsetlin Machine system-on-chip designs for edge applications, achieving up to 13.4× faster inference and 7× better resource efficiency than quantised neural network implementations.
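Why inference reduces to bitwise operations can be sketched with bit-packed literals; the bit layout and example data below are assumptions made for illustration, not any particular hardware design.

```python
# Illustrative sketch: with literals packed into a machine word, one
# clause evaluates with a single AND and compare per word. Bits 0..3
# hold four Boolean features, bits 4..7 their negations.

def clause_fires(include_mask, literal_bits):
    # The clause fires iff every included literal bit is set:
    # (literals AND mask) == mask.
    return (literal_bits & include_mask) == include_mask

x = 0b1010                                  # features x3..x0 = 1,0,1,0
literal_bits = x | ((~x & 0b1111) << 4)     # negations in bits 4..7

assert clause_fires(0b0100_0010, literal_bits)      # x1 AND ~x2 fires
assert not clause_fires(0b0000_0011, literal_bits)  # x0 AND x1 does not
```

A hardware implementation evaluates many such word-wide AND/compare operations in parallel, which is what yields the throughput and energy figures cited above.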

Is there an open-source Tsetlin Machine library? chevron

The primary open-source library is TMU (Tsetlin Machine Unified), maintained by the Centre for Artificial Intelligence Research (CAIR) at the University of Agder and installable via pip install tmu. TMU supports standard, convolutional, regression, weighted, and coalesced Tsetlin Machine variants in Python with C and CUDA backends for high-performance computation. Additional implementations exist in C, C++, Julia, and Rust.

What types of machine learning tasks can Tsetlin Machines solve beyond classification? chevron

Tsetlin Machines now span classification, regression, image recognition, natural language processing, reinforcement learning, recommendation systems, clustering, and graph-based reasoning through specialised variants. The Regression Tsetlin Machine handles continuous outputs for tasks like housing price prediction, the Graph Tsetlin Machine performs logical learning on graph-structured data, and a Tsetlin Machine for contextual bandits was published at NeurIPS 2022. Real-world deployments include ECG analysis for heart disease detection, network intrusion detection for IoT cybersecurity, economic growth forecasting, and federated learning for privacy-preserving distributed AI.