A New Generation of Artificial Intelligence

Literal Labs applies the Tsetlin machine approach to deliver AI that is faster, explainable, and orders of magnitude more energy-efficient than today's neural networks.

Tsetlin-based AI

Ultra low power

Tsetlin algorithms are less compute-heavy than neural networks and, with acceleration, use orders of magnitude less energy per inference

High throughput

250X faster inference with Tsetlin machine models, rising to 1,000X when accelerated

On chip training

Our technology uniquely enables training at the edge, without the need for cloud support

Explainable AI

Our architecture enables explainability and ensures accountability for decisions made

Tsetlin Approach

Like neural networks, Tsetlin machines can be trained on complex machine-learning tasks, but they take an alternative approach that offers significant current and future benefits over other AI architectures, including speed, energy efficiency, and explainability.

Tsetlin-based AI is grounded in propositional logic rather than biology. This makes it computationally simpler and faster at inference, and therefore less energy-intensive.
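To illustrate how propositional-logic inference works, here is a minimal sketch of Tsetlin-machine-style classification. This is not Literal Labs' implementation; the clauses below are hand-crafted for a toy XOR problem, whereas a real Tsetlin machine learns them with teams of Tsetlin automata.

```python
# Illustrative sketch of Tsetlin-machine-style inference (not Literal Labs'
# implementation). Each clause is a conjunction (AND) of Boolean literals;
# positive clauses vote for the class, negative clauses vote against it.

def clause_fires(clause, x):
    """A clause is a list of (feature_index, expected_value) literals.
    It fires only if every literal matches the Boolean input x."""
    return all(x[i] == v for i, v in clause)

def classify(x, positive_clauses, negative_clauses):
    """The class decision is the sign of the summed clause votes."""
    votes = sum(clause_fires(c, x) for c in positive_clauses) \
          - sum(clause_fires(c, x) for c in negative_clauses)
    return 1 if votes > 0 else 0

# Hand-crafted clauses recognising XOR of two input bits.
pos = [[(0, 1), (1, 0)], [(0, 0), (1, 1)]]   # fire when the bits differ
neg = [[(0, 1), (1, 1)], [(0, 0), (1, 0)]]   # fire when the bits match

print([classify([a, b], pos, neg) for a in (0, 1) for b in (0, 1)])
# → [0, 1, 1, 0]
```

Because each clause is a readable AND-rule over input features, and inference is just clause evaluation and vote counting, the model's decisions can be inspected directly; this is the source of both the explainability and the low computational cost.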

Learn about our Technology
Our Research

AI architecture built on the Tsetlin approach by world-leading experts

Meet the team
Supported by
Newcastle University
Silicon Catalyst
Cambridge Future Tech
"Big Tech is pouring billions into British AI investments"
"London's AI Future" (37:00), Interview with Noel Hurley, Literal Labs CEO
"AI isn't energy efficient right now. This ex-Arm VP has joined a new startup to try to change that."
"Former Arm executives join ‘Tsetlin machine’ startup"
"AI startup wants to make GPU training obsolete with an extraordinary piece of tech"