Use Literal Labs’ software pipeline to train, test, and deploy logic-based AI models (LBNs) that run over 50x faster and use over 50x less energy than neural networks. They’re also logically explainable.
From data ingestion to deployment, our pipeline automates and guides you through the entire process of training and deploying your own fully custom, logic-based AI models.
GPUs and accelerators aren’t the future. They’re the bottleneck. Literal Labs forges fast, efficient, explainable AI models from your data, enabling accurate, accelerator-free AI performance everywhere from the battery-powered edge to cloud servers.
AI can’t scale if every model requires specialised hardware. We remove that barrier with logic-based networks. Faster, smaller, and explainable by design, they run efficiently on CPUs and microcontrollers, allowing you to avoid the GPU tax altogether.
LBNs aren’t an optimisation service. They’re high-speed, low-energy, explainable, logic-based AI models built from the ground up using Literal Labs’ exclusive architecture and algorithms. Most model training tools tweak what already exists. We craft what never did — an entirely new class of model, designed to perform where others can’t.
Literal Labs’ pipeline doesn’t stop at trial and error. It intelligently tests thousands of logic-based network configurations and performs billions of calculations to refine them. The result: one model, perfectly trained on your data and tuned to your deployment and performance needs. Precision forged at scale.
High performance shouldn’t demand racks of accelerators or spiralling cloud bills. LBNs run on microcontrollers, CPUs, and standard servers. They’re small enough for IoT, strong enough for enterprise. Already optimised, they require no GPUs, TPUs, or custom hardware. Deploy intelligence, not hardware bills.
The average LBN model is under 40kB, without sacrificing accuracy. Benchmarking shows they’re trained to within ±2% of the accuracy of larger, resource-hungry AI algorithms. Small in memory, LBNs are sharp in performance.
You don’t need a large engineering team to get results. Literal Labs’ pipeline simplifies and accelerates training, benchmarking, and deployment in-browser or through API, so you can fit it seamlessly into the way you work best.
AI assisted configurations
A billion models to one. The pipeline performs billions of calculations and tests, and benchmarks thousands of models, all to help you find the single most accurate and performant model for your chosen deployment. And it does so automatically and intelligently, making the decisions an AI engineer would to guide your model swiftly to completion.
Benchmark without guesswork
Stop wasting time on trial-and-error benchmarking. The pipeline automates the process, comparing candidate configurations and models directly against your data and deployment constraints. The result: clear performance metrics without the endless grind. Automated benchmarking. No second-guessing.
From data to deploy, all in browser
Upload a dataset. Define your deployment target. Watch an LBN train. And then deploy it. The pipeline streamlines the process so that anyone can create efficient AI models in a browser tab or through their existing workflow via API. No complex installs. No GPU farm. Just data in, model out.
Simple model retraining
Today’s perfect model might need refinement tomorrow. The pipeline makes retraining simple: upload fresh data and it can automate improving your existing models. Browser or script, it adapts to your workflow. Continuous improvement, without the overhead.
Fine tune further
Once it has crafted your model, the pipeline can still refine it further. Adjust deployment parameters, push for lower energy, or tweak accuracy trade-offs. The pipeline intelligently adapts, re-tuning the LBN to your exact requirements. Fine-tuning, without the fuss.
Extensive API for workflow integration (coming)
Automate your model training at scale or deeply integrate it with your existing workflows and pipelines through Literal Labs’ extensive API. Browser simplicity for some, code-level integration for others. Coming 2026 — get notified when it’s ready.
Deployment should be simple. On microcontrollers. On servers. On anything in between.
With Literal Labs, deployment is simple. LBNs are small enough for IoT devices, efficient enough for even battery-powered edge computing, and accurate enough for server workloads. One pipeline, countless deployment options.
Small and silicon-agnostic. LBNs embed on everything from coin-cell sensors to MCUs, running in C++ across Arm, RISC-V, and PowerPC architectures from any maker.
CPU-driven, cost-cutting, and cloud-ready. Deploy to Managed Inference or self-host for enterprise — no GPUs, no excess.
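To make “deployable via C++” concrete, here is a minimal illustrative sketch of how a logic-based classifier can reduce to plain, dependency-free C++. This is an assumed structure for illustration only, not Literal Labs’ actual model format: it shows why logic-based inference (bitwise and integer operations, no floating-point matrix multiplies) ports to Arm, RISC-V, or PowerPC without accelerators.

```cpp
// Illustrative sketch only — assumed structure, not Literal Labs' format.
// A logic-based classifier as AND-clauses over binarised features,
// decided by a simple vote. Integer/bitwise ops only.
#include <array>
#include <cstdint>

// Binarised input: each feature thresholded to 0 or 1.
using Features = std::array<uint8_t, 4>;

// A clause is an AND over selected literals (features or their negations).
struct Clause {
    uint8_t include_mask;  // bit i set: feature i must be 1
    uint8_t exclude_mask;  // bit i set: feature i must be 0
    int8_t  polarity;      // +1 votes for the class, -1 against
};

// Returns 1 if every literal the clause selects is satisfied.
static int clause_fires(const Clause& c, const Features& x) {
    for (int i = 0; i < 4; ++i) {
        const uint8_t bit = static_cast<uint8_t>(1u << i);
        if ((c.include_mask & bit) && x[i] == 0) return 0;
        if ((c.exclude_mask & bit) && x[i] == 1) return 0;
    }
    return 1;
}

// Class decision: sign of the summed clause votes.
int predict(const Clause* clauses, int n, const Features& x) {
    int vote = 0;
    for (int i = 0; i < n; ++i)
        if (clause_fires(clauses[i], x)) vote += clauses[i].polarity;
    return vote >= 0 ? 1 : 0;
}
```

Because each clause is a readable rule (“feature 0 is high AND feature 1 is low”), a model built this way is inspectable clause by clause, which is the sense in which logic-based models are explainable by design.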
We’re getting ready to open the gates. Soon, you'll be able to train your own logic-based AI models using the very same tools our engineers use. Build, benchmark, and deploy forecasting models with zero code. And zero friction.
Enter your details below to be the first to know when our platform launches.
How do LBNs perform?
LBNs are an AI algorithm and an alternative to classical algorithms such as neural networks. Because they’re logic-based, they’re fast. Really fast. Even when run on an MCU, they perform over 50x faster than neural networks and consume over 50x less power. You can download our use cases and benchmark results for more performance information.
What does the software pipeline do?
Literal Labs offers training software (aka a pipeline) for logic-based networks (LBNs); they’re an alternative to neural networks. The pipeline lets you train, benchmark, and deploy AI models that are faster, smaller, more efficient, and more explainable than those offered by classical machine learning approaches.
Do I need GPUs or specialised hardware to use it?
No. The pipeline runs on Literal Labs’ secure servers, while the models it trains can be deployed on MCUs, CPUs, servers, and other hardware.
What types of data do LBNs support?
The current version supports CSV datasets, including time-series, sensor, and tabular data. Future releases will expand to additional data formats and their use cases. Subscribe to our newsletter to be alerted when additional data formats are supported and released.
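As an illustration, a time-series sensor dataset for the pipeline might look like the CSV below. The column names and values are hypothetical, chosen only to show the general shape of time-series/tabular input; consult the pipeline’s documentation for the exact schema it expects.

```csv
timestamp,sensor_id,temperature_c,vibration_rms,label
2025-01-06T09:00:00Z,pump-01,41.2,0.031,normal
2025-01-06T09:01:00Z,pump-01,41.5,0.034,normal
2025-01-06T09:02:00Z,pump-01,55.8,0.112,anomaly
```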
How accurate are LBNs compared to neural networks?
Despite averaging under 8kB in size, LBNs typically perform within ±2% of the accuracy of much larger neural networks. LBNs balance accuracy with unmatched speed and efficiency.
Do I need an engineering team to build LBNs?
Definitely not. The training pipeline is designed to simplify the production of LBNs, and we’re constantly refining it so that companies can use its capabilities whether or not they have an AI engineering or data science team. The pipeline offers a browser-based GUI, making it easy for non-experts and experts alike to train LBN models, while the API gives engineers deeper integration and control.
Where can I deploy LBNs?
Just about anywhere (within reason). LBNs are small and efficient enough to perform AI inference on microcontrollers and edge devices, where they’re silicon-agnostic and deployable via C++. In fact, LBNs have even been tested on battery-powered IoT devices. But LBNs aren’t just for edge AI: they also scale to bring logic-based AI to CPUs and servers.
Are LBNs suitable for enterprise applications?
Absolutely. In fact, all our use cases document solutions for £100 million+ companies. LBNs are compact but powerful, making them ideal for high-volume or resource-constrained environments. Current deployment use cases include IoT, predictive maintenance, decision intelligence dashboards, and anomaly detection.
How does LBN model retraining work?
When your data changes, retraining is simple. Upload a new dataset via the GUI or script the process via API. You can then trigger the pipeline to re-train, re-benchmark, and fine-tune your existing models for redeployment.
Can the software train generative AI models?
Not yet. We are currently focused on deterministic LBNs as opposed to generative tasks. However, we are constantly expanding both the capabilities of LBNs and the pipeline used to train them. To stay informed as capabilities expand, subscribe to our newsletter.