Fast, efficient, explainable AI

Train the world’s most efficient AI models

Use Literal Labs’ software pipeline to train, test & deploy logic-based AI models (LBNs) that run over 50x faster and use over 50x less energy than neural networks. They’re also logically explainable.


From data ingestion to deployment, our pipeline automates and guides you through the entire process of training and deploying your own fully custom, logic-based AI models.

Breakdown of the LBN training pipeline showing configuration, training, versioning, benchmarking and testing, SDK generation, and deployment of LBN AI models to the edge or inference servers.
Assisted AI pipeline

How it gives you more

GPUs and accelerators aren’t the future. They’re the bottleneck. Literal Labs forges fast, efficient, explainable AI models from your data, enabling accurate, accelerator-free AI performance everywhere, from the battery-powered edge to cloud servers.

AI can’t scale if every model requires specialised hardware. We remove that barrier with logic-based networks. Faster, smaller, and explainable by design, they run efficiently on CPUs and microcontrollers, allowing you to avoid the GPU tax altogether.

LBN vs NN comparison

Original, not just optimisation

LBNs aren’t an optimisation service. They’re high-speed, low-energy, explainable, logic-based AI models built from the ground up using Literal Labs’ exclusive architecture and algorithms. Most model training tools tweak what already exists. We craft what never did — an entirely new class of model, designed to perform where others can’t.

Up to 54x faster inference with LBNs
Up to 52x more energy efficiency with LBNs
Logic-based AI makes LBNs more explainable than neural networks
Benchmark charts: model efficiency score, AI model battery use, and LBN AI model size.

A billion to one

Literal Labs’ pipeline doesn’t stop at trial and error. It intelligently tests thousands of logic-based network configurations and performs billions of calculations to refine them. The result: one model, perfectly trained on your data and tuned to your deployment and performance needs. Precision forged at scale.

AI without the GPUs

High performance shouldn’t demand racks of accelerators or spiralling cloud bills. LBNs run on microcontrollers, CPUs, and standard servers. They’re small enough for IoT, strong enough for enterprise. Already optimised, they require no GPUs, TPUs, or custom hardware. Deploy intelligence, not hardware bills.

AI models which do not require a GPU, TPU or accelerator
±2% accuracy difference

Accuracy without excess

The average LBN model is under 40kB, without sacrificing accuracy. Benchmarking shows they train to within ±2% of the accuracy of larger, resource-hungry AI algorithms. Small in memory, LBNs are sharp in performance.

Training LBNs

GUI when you want it, API when you don’t

You don’t need a large engineering team to get results. Literal Labs’ pipeline simplifies and accelerates training, benchmarking, and deployment in-browser or through API, so you can fit it seamlessly into the way you work best.

AI-assisted configurations

A billion models to one. The pipeline performs billions of calculations and tests, and benchmarks thousands of models, all to help you find the single most accurate and performant model for your chosen deployment. And it does so automatically and intelligently, making the decisions an AI engineer would, to guide your model speedily towards completion.

Configure LBN model training settings
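To make that idea concrete, here is a purely conceptual sketch of an automated configuration search. The pipeline runs this kind of search for you, server-side and at far greater scale; the configuration fields and the scoring function below are illustrative stand-ins, not Literal Labs’ actual hyperparameters.

import random

# Conceptual sketch only: the real pipeline does this for you, at scale.
def evaluate(config):
    """Stand-in for training and benchmarking one candidate LBN."""
    return random.random()  # imagine: accuracy on held-out data

# Illustrative search space; not Literal Labs' actual parameters.
search_space = {
    "clauses": [64, 128, 256, 512],
    "literal_budget": [4, 8, 16],
    "threshold": [50, 100, 200],
}

best_config, best_score = None, -1.0
for _ in range(1000):  # the real pipeline benchmarks thousands of candidates
    candidate = {key: random.choice(values) for key, values in search_space.items()}
    score = evaluate(candidate)
    if score > best_score:
        best_config, best_score = candidate, score

print(best_config, best_score)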

Benchmark without guesswork

Stop wasting time on trial-and-error benchmarking. The pipeline automates the process, comparing candidate configurations and models directly against your data and deployment constraints. The result: clear performance metrics without the endless grind. Automated benchmarking. No second-guessing.

Browse LBN benchmarking accuracy and energy
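As a rough illustration of what benchmarking without guesswork can look like in code, this sketch ranks candidate models by accuracy, breaking ties on energy use. The metric field names are assumptions for illustration, not the pipeline’s actual output schema.

# Illustrative only: ranking benchmark results for candidate models.
candidates = [
    {"model_id": "lbn-a", "accuracy": 0.941, "energy_uj": 12},
    {"model_id": "lbn-b", "accuracy": 0.948, "energy_uj": 31},
    {"model_id": "lbn-c", "accuracy": 0.948, "energy_uj": 18},
]

# Highest accuracy first; among ties, prefer the lower-energy model.
best = min(candidates, key=lambda c: (-c["accuracy"], c["energy_uj"]))
print(f"best: {best['model_id']} "
      f"({best['accuracy']:.1%}, {best['energy_uj']} µJ/inference)")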

From data to deploy, all in browser

Upload a dataset. Define your deployment target. Watch an LBN train. And then deploy it. The pipeline streamlines the process so that anyone can create efficient AI models in a browser tab or through their existing workflow via API. No complex installs. No GPU farm. Just data in, model out.

Deploy AI model to inference server or SDK
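For those who would rather script it than click through it, the whole flow might look something like the sketch below. Every endpoint path and field name here is an assumption for illustration; the public API arrives in 2026.

import requests

API_URL = "https://www.literal-labs.ai/api/"
headers = {"Authorization": "Bearer your_api_token"}

# 1. Upload a CSV dataset (hypothetical /datasets endpoint).
with open("sensor_readings.csv", "rb") as f:
    dataset = requests.post(API_URL + "datasets", headers=headers,
                            files={"file": f}).json()

# 2. Train against a deployment target (hypothetical /train endpoint
#    and field names).
job = requests.post(API_URL + "train", headers=headers,
                    json={"dataset_id": dataset["id"],
                          "deployment_target": "mcu"}).json()

# 3. Download the trained model artifact (hypothetical path).
artifact = requests.get(API_URL + f"models/{job['model_id']}/artifact",
                        headers=headers)
with open("lbn_model.bin", "wb") as f:
    f.write(artifact.content)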

Simple model retraining

Today’s perfect model might need refinement tomorrow. The pipeline makes retraining simple: upload fresh data and it automatically improves your existing models. Browser or script, it adapts to your workflow. Continuous improvement, without the overhead.

Retrain LBN AI model
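Scripted retraining could be as short as the sketch below, again assuming hypothetical endpoint and field names, since the public API is still to come.

import requests

API_URL = "https://www.literal-labs.ai/api/"
headers = {"Authorization": "Bearer your_api_token"}

# Upload the fresh data, then trigger a retrain of an existing model.
# The /datasets and /retrain endpoints and field names are assumptions.
with open("fresh_data.csv", "rb") as f:
    new_dataset = requests.post(API_URL + "datasets", headers=headers,
                                files={"file": f}).json()

retrain = requests.post(API_URL + "retrain", headers=headers,
                        json={"model_id": "your_existing_model_id",
                              "dataset_id": new_dataset["id"]})
print(retrain.status_code)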

Fine-tune further

Once it has crafted your model, the pipeline can still refine it further. Adjust deployment parameters, push for lower energy, or tweak accuracy trade-offs. The pipeline intelligently adapts, re-tuning the LBN to your exact requirements. Fine-tuning, without the fuss.

Change AI model deployment target
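A re-tuning request might look like this sketch: nudge the deployment parameters and let the pipeline re-fit the model. The endpoint and parameter names are illustrative assumptions, not the released API.

import requests

API_URL = "https://www.literal-labs.ai/api/"
headers = {"Authorization": "Bearer your_api_token"}

# Hypothetical /tune endpoint: push for lower energy while bounding the
# acceptable accuracy trade-off. All names here are assumptions.
tuned = requests.post(API_URL + "tune", headers=headers,
                      json={"model_id": "your_model_id",
                            "deployment_target": "coin-cell-sensor",
                            "max_energy_uj": 5,
                            "min_accuracy": 0.90})
print(tuned.status_code)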

Extensive API for workflow integration (coming)

Automate your model training at scale or deeply integrate it with your existing workflows and pipelines through Literal Labs’ extensive API. Browser simplicity for some, code-level integration for others. Coming 2026 — get notified when it’s ready.

import requests

API_URL = "https://www.literal-labs.ai/api/"
TOKEN = "your_api_token"
# Bearer-token auth; note the standard HTTP header name is "Authorization".
resp = requests.get(API_URL, headers={"Authorization": f"Bearer {TOKEN}"})
print(resp.status_code, resp.text[:500])
Deploying LBNs

AI that performs wherever you need it

Deployment should be simple. On microcontrollers. On servers. On anything in between.

With Literal Labs, deployment is simple. LBNs are small enough for IoT devices, efficient enough for even battery-powered edge computing, and accurate enough for server workloads. One pipeline, countless deployment options.

Embed on edge

Small and silicon-agnostic. LBNs embed on everything from coin-cell sensors to MCUs, running in C++ across Arm, RISC-V, and PowerPC architectures from any maker.

Emass Infineon Microchip Nordic Semiconductor NXP Onsemi Renesas Electronics Silicon Labs ST Texas Instruments
Querying of an AI model to generate an outcome for decision intelligence systems
Managed Inference Server

Execute on server

CPU-driven, cost-cutting, and cloud-ready. Deploy to Managed Inference or self-host for enterprise — no GPUs, no excess.
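Querying a managed inference endpoint could be a single POST, as in the sketch below. The path, payload shape, and response format are assumptions for illustration, not the released service.

import requests

# Hypothetical Managed Inference request: the path and JSON shape
# are assumptions for illustration.
resp = requests.post(
    "https://www.literal-labs.ai/api/inference",
    headers={"Authorization": "Bearer your_api_token"},
    json={"model_id": "your_model_id",
          "inputs": [[0.42, 0.13, 0.88]]},  # one feature vector
)
print(resp.json())  # assumed: a prediction plus its logical explanation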

Real-world impact

Case studies and benchmarks

Hydroinformatics AI models

Helping hydro

Embedded hydro-informatics AI runs on battery-powered sensors in sewers, monitoring harsh flows without a cloud connection.

Supply chain forecasting explainable AI models

Well stocked, well fed

Inventory forecasts cut from 4 hours to 3 minutes, with 2× better WMAPE accuracy across thousands of SKUs.

In-car explainable AI models

Fast cars

Edge AI for cars: logic-based models deployed in tight spaces where PowerPC is the only hardware available.

Anomaly detection AI models

Anomalous performance

Benchmarked 52× faster than the like-for-like best-in-class model on ToyADMOS anomaly detection.

Battery powered AI models

Battery life

LBNs cut power use so much that a coin cell lasts 10 years, up from 3 months with an RNN.

Logic-based AI versus XGBoost

Boosted boost

LBNs run up to 250× faster than XGBoost while using 130kB less memory.

Your pipeline begins here

Train your own models — soon

We’re getting ready to open the gates. Soon, you'll be able to train your own logic-based AI models using the very same tools our engineers use. Build, benchmark, and deploy forecasting models with zero code. And zero friction.

Enter your details below to be the first to know when our platform launches.

Frequently asked questions

How do LBNs perform?

LBNs are an AI algorithm and an alternative to classical algorithms such as neural networks. Because they’re logic-based, they’re fast, really fast. Even when run on an MCU, they perform over 50x faster than neural networks and consume over 50x less power. You can download our use cases and benchmark results for more performance information.

What does the software pipeline do?

Literal Labs offers training software (aka a pipeline) for logic-based networks (LBNs); they’re an alternative to neural networks. The pipeline lets you train, benchmark, and deploy AI models that are faster, smaller, more efficient, and more explainable than those offered by classical machine learning approaches.

Do I need GPUs or specialised hardware to use it?

No. The pipeline runs on Literal Labs’ secure servers, while the models it trains can be deployed on MCUs, CPUs, servers, and other hardware.

What types of data do LBNs support?

The current version supports CSV datasets, including time-series, sensor, and tabular data. Future releases will expand to additional data formats and their use cases. Subscribe to our newsletter to be alerted when additional data formats are supported and released.
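In practice, getting data into the supported shape is just a matter of writing a plain CSV. A minimal sketch, using pandas for convenience and illustrative column names (not a required schema):

import pandas as pd

# Shape tabular / time-series sensor data into a plain CSV for upload.
# Column names are illustrative, not a required schema.
df = pd.DataFrame({
    "timestamp": pd.date_range("2025-01-01", periods=4, freq="h"),
    "sensor_a": [0.12, 0.18, 0.11, 0.35],
    "sensor_b": [1.02, 0.98, 1.10, 1.44],
    "label":    [0, 0, 0, 1],  # e.g. an anomaly flag for supervised training
})
df.to_csv("training_data.csv", index=False)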

How accurate are LBNs compared to neural networks?

Despite averaging under 8kB in size, LBNs typically perform within ±2% of the accuracy of much larger neural networks. LBNs balance accuracy with unmatched speed and efficiency.

Do I need an engineering team to build LBNs?

Definitely not. The training pipeline is designed to simplify the production of LBNs, and we’re constantly refining it so that companies can use its capabilities whether or not they have an AI engineering or data science team. The pipeline offers a browser-based GUI, making it easy for non-experts and experts alike to train LBN models, while the API gives engineers deeper integration and control.

Where can I deploy LBNs?

Just about anywhere (within reason). LBNs are small and efficient enough to perform AI inference on microcontrollers and edge devices; they’re silicon-agnostic and deploy via C++. In fact, LBNs have even been tested on battery-powered IoT devices. But LBNs aren’t just for edge AI: they also scale to bring logic-based AI to CPUs and servers.

Are LBNs suitable for enterprise applications?

Absolutely. In fact, all our use cases document solutions for £100 million+ companies. LBNs are compact but powerful, making them ideal for high-volume or resource-constrained environments. Current deployment use cases include IoT, predictive maintenance, decision intelligence dashboards, and anomaly detection.

How does LBN model retraining work?

When your data changes, retraining is simple. Upload a new dataset via the GUI or script the process via API. You can then trigger the pipeline to re-train, re-benchmark, and fine-tune your existing models for redeployment.

Can the software train generative AI models?

Not yet. We are currently focussed on deterministic LBNs as opposed to generative tasks. However, we are constantly expanding both the capabilities of LBNs and the pipeline used to train them. To stay informed as capabilities expand, subscribe to our newsletter.