Literal Labs' co-founder Professor Alex Yakovlev and CTO Leon Fedden recently sat down (and stood up) with electronics engineer Elliott Lee-Hearn of ipXchange to discuss how the company is rethinking AI at the edge with an alternative to neural networks: Tsetlin Machines.
Unlike traditional machine learning, which relies on heavy matrix operations and opaque black-box methods, Tsetlin Machines take a logic-based approach. This enables models that are transparent, lightweight, and energy-efficient — capable of running on microcontrollers with extremely limited memory and power. For engineers working on real-world edge AI, this represents a genuine shift: explainable AI that performs where it’s needed most, without the burden of accelerators or excessive compute.
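To make the logic-based idea concrete, here is a minimal, hypothetical sketch of how a trained Tsetlin Machine classifies an input: clauses are conjunctions of Boolean literals that cast votes for a class, and the class with the highest vote sum wins. The clause structures and feature values below are invented for illustration and are not Literal Labs' implementation.

```python
# Illustrative sketch of Tsetlin Machine inference (not Literal Labs' code).
# A clause is a conjunction of literals over Boolean features; each clause
# casts a vote with a polarity, and the highest-scoring class is predicted.

def clause_fires(clause, x):
    # clause: list of (feature_index, expected_value) literals;
    # the clause fires only if every literal matches the input.
    return all(x[i] == expected for i, expected in clause)

def predict(clauses_per_class, x):
    # clauses_per_class: {class: [(polarity, clause), ...]} with polarity +/-1
    scores = {
        c: sum(p for p, cl in clauses if clause_fires(cl, x))
        for c, clauses in clauses_per_class.items()
    }
    return max(scores, key=scores.get)

# Toy example: two Boolean features, XOR-style classes (hypothetical).
clauses = {
    0: [(+1, [(0, 1), (1, 1)]), (+1, [(0, 0), (1, 0)])],  # inputs agree
    1: [(+1, [(0, 1), (1, 0)]), (+1, [(0, 0), (1, 1)])],  # inputs differ
}
print(predict(clauses, [1, 0]))  # → 1 (inputs differ)
```

Because inference is just Boolean matching and integer vote counting, with no matrix multiplications, a model like this maps naturally onto the limited memory and integer arithmetic of a microcontroller, and each prediction can be traced back to the exact clauses that fired.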
In the interview, Alex and Leon explain what makes Tsetlin Machines different, why explainability in AI matters, and how Literal Labs’ technology is designed to overcome the size, speed, and opacity challenges of today’s neural network approaches. They also highlight the kinds of applications where logic-based models are the best fit, from embedded IoT to decision intelligence at scale.
If you’re struggling with models that are too large, too slow, or too opaque, this discussion offers a clear view of how Literal Labs is building a more sustainable and practical path forward for AI at the edge.
Visit 'Tsetlin Machines vs Neural Networks: A Logic-Based Alternative for Edge AI' to watch the video in full. It features Alex Yakovlev and Leon Fedden and was released 26 Sep 2025.
Literal Labs is a UK startup pioneering logic-based AI, whose Logic-Based Networks (LBNs) deliver up to 54× faster inference with around 52× less energy use than neural networks, running efficiently on standard MCUs and CPUs without the need for GPUs.