A New Generation of Artificial Intelligence

Literal Labs applies the Tsetlin machine approach to build AI that is faster, explainable, and orders of magnitude more energy efficient than today's neural networks.

Tsetlin-based AI

Ultra-low power

Tsetlin algorithms are less compute-heavy than neural networks and, with acceleration, use orders of magnitude less energy per inference

High throughput

Tsetlin machine models deliver 250x faster inference, and up to 1,000x faster when accelerated

On-chip training

Our technology uniquely enables training at the edge, with no need for cloud support

Explainable AI

Our architecture enables explainability and ensures accountability for the decisions a model makes

Tsetlin Approach

Like neural networks, Tsetlin machines can be trained on complex machine learning tasks, but they are an alternative approach that offers significant current and future benefits over other AI architectures: speed, energy efficiency, and explainability.

Tsetlin-based AI is built on propositional logic rather than on biological models. This makes it computationally simpler, so inference is faster and far less energy-intensive.
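
To make the propositional-logic idea concrete, here is a minimal Python sketch of Tsetlin-machine-style inference. It is illustrative only, not Literal Labs' implementation: a trained Tsetlin machine can be viewed as a set of conjunctive clauses over binary features, and a prediction is a simple count of which clauses fire, with no multiply-accumulate arithmetic. The clause sets and feature values below are hypothetical.

# Illustrative sketch of Tsetlin-machine-style inference (not Literal Labs' implementation).
# A clause is a list of (feature_index, expected_value) literals; it fires only if
# every literal matches the binary input. Prediction is a vote count over clauses.

def clause_fires(clause, features):
    # The clause is a conjunction: all included literals must match.
    return all(features[i] == expected for i, expected in clause)

def classify(positive_clauses, negative_clauses, features):
    # Positive-polarity clauses vote for the class, negative-polarity clauses vote against.
    votes = (sum(clause_fires(c, features) for c in positive_clauses)
             - sum(clause_fires(c, features) for c in negative_clauses))
    return 1 if votes >= 0 else 0

# Hypothetical example: two toy clauses over three binary features.
positive = [[(0, 1), (2, 0)]]   # fires when feature 0 is 1 AND feature 2 is 0
negative = [[(1, 1)]]           # fires when feature 1 is 1
print(classify(positive, negative, [1, 0, 0]))  # -> 1

Because each clause is a readable AND/NOT rule over named features, the clauses that fire for a given input can be inspected directly, which is the basis of the explainability claim above.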

AI architecture built on the Tsetlin approach by world-leading experts

Supported by

Newcastle University (United Kingdom)
Silicon Catalyst UK
Cambridge Future Tech (investor)

Press features

"Big Tech is pouring billions into British AI investments"
"London's AI Future" (starts 37:00) Bloomberg interview with Noel Hurley, Literal Labs CEO
"AI startup wants to make GPU training obsolete with an extraordinary piece of tech"
"Former Arm executives join ‘Tsetlin machine’ startup"
"AI isn't energy efficient right now. This ex-Arm VP has joined a new startup to try to change that."