The GPU stopped before the edge. We didn't.

ModelMill trains Logic-Based Networks. They're a wholly different architecture for AI that runs up to 54× faster while using 52× less energy than neural networks, on any CPU, MCU, or 32-bit processor. No GPU. No cloud dependency. No new hardware. All edge.

AI without the accelerator or GPU

A better class of AI, built for the edge

ModelMill is the training platform for Logic-Based Networks — Literal Labs' proprietary AI architecture. LBNs aren't neural networks trimmed and compressed until they barely run. They're a ground-up rethink of how AI works, replacing floating-point multiplication with propositional logic while retaining the accuracy of deep learning. The result is an AI model that’s smaller, faster, and more energy-efficient by design. Not by sacrifice.

Most platforms shrink models until they're crippled, just so they survive at the edge. ModelMill trains models that thrive there.
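
To make "logic instead of multiplication" concrete, here is a minimal, self-contained sketch of a generic propositional-logic classifier in C. It illustrates the general family of technique only; the clause structure and names are invented for this example and are not taken from the LBN SDK. Inference reduces to bit tests and integer adds, with no floating-point multiply anywhere.

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    #define NUM_FEATURES 4
    #define NUM_CLAUSES  3

    /* A clause is an AND over selected input bits or their negations,
       voting for or against the class when it fires. */
    typedef struct {
        uint8_t include[NUM_FEATURES];  /* 1 = this literal participates   */
        uint8_t negate[NUM_FEATURES];   /* 1 = use NOT(feature) as literal */
        int8_t  vote;                   /* +1 or -1 toward the class score */
    } Clause;

    static bool clause_fires(const Clause *c, const bool x[NUM_FEATURES]) {
        for (int i = 0; i < NUM_FEATURES; i++) {
            if (!c->include[i]) continue;
            bool literal = c->negate[i] ? !x[i] : x[i];
            if (!literal) return false;  /* one false literal kills the AND */
        }
        return true;
    }

    int main(void) {
        /* Toy model: three hand-written clauses instead of weight matrices. */
        const Clause clauses[NUM_CLAUSES] = {
            { {1,1,0,0}, {0,0,0,0}, +1 },  /* x0 AND x1     -> vote +1 */
            { {0,0,1,0}, {0,0,1,0}, +1 },  /* NOT x2        -> vote +1 */
            { {1,0,0,1}, {1,0,0,0}, -1 },  /* NOT x0 AND x3 -> vote -1 */
        };
        const bool x[NUM_FEATURES] = { true, true, false, false };

        int score = 0;                     /* integer adds only, no floats */
        for (int j = 0; j < NUM_CLAUSES; j++)
            if (clause_fires(&clauses[j], x))
                score += clauses[j].vote;

        printf("class score %d -> %s\n", score, score > 0 ? "positive" : "negative");
        return 0;
    }

Because every operation is a Boolean test or an integer add, code of this shape runs identically on a Cortex-M, a RISC-V core, or x86, and the same bits in always produce the same score out.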


Faster

MLPerf Tiny benchmark, ARM Cortex-M7. Logic instead of multiplication.


Less energy

Battery-powered deployments that run for years, not months.


No special hardware

Any 32-bit processor. ARM, RISC-V, x86, ESP, PowerPC. No new SoCs. No new capex.


Fully deterministic

Same data in, same answer out. Every time. No hallucinations. No variance.

ModelMill platform
Edge’s logical algorithm

Edge insights no longer stall at the algorithm

Edge devices are everywhere. They monitor sewers, keep cars safe, track supply chains and manufacturing, and guard critical infrastructure. Deploying neural network AI on them? That’s where the industry has stalled. Until now.

Logic-based AI makes LBNs more explainable than neural networks

Not a black box

ModelMill produces models whose training can be explained and whose decisions can be traced, understood, and audited. All with a clear chain of reasoning.
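
As a hypothetical illustration of what that traceability can look like: when a model is built from rules rather than weights, the explanation of a decision is simply the list of rules that fired. The rule texts and sensor names below are invented for this sketch, not drawn from ModelMill.

    #include <stdbool.h>
    #include <stdio.h>

    int main(void) {
        /* Invented sensor readings for the sketch */
        bool vibration_high = true, temp_rising = true, load_low = false;

        /* Each rule is human-readable, so the audit trail is the rule list */
        struct { const char *text; bool fired; } rules[] = {
            { "vibration_high AND temp_rising -> flag fault", vibration_high && temp_rising },
            { "load_low AND temp_rising -> flag fault",       load_low && temp_rising },
        };

        for (int i = 0; i < 2; i++)
            if (rules[i].fired)
                printf("decision evidence: %s\n", rules[i].text);
        return 0;
    }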


Embed on edge

The LBN SDK ships as pure C. It runs on ARM, RISC-V, PowerPC, and x86 from any manufacturer, whether a sub-$1 MCU or a decades-old industrial processor. No new silicon. No board redesign.
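
The page doesn't show the generated SDK's actual interface, but "pure C" plausibly means a shape like the following: model parameters baked into const tables that live in flash, an integer-only predict function, and no heap, floats, or OS calls. Every lbn_* name here is invented for illustration.

    #include <stdbool.h>
    #include <stdint.h>

    #define LBN_INPUT_BITS 8   /* invented name, illustrative width */

    /* Parameters as const data: on an MCU this lives in flash.
       No malloc, no floating point, no OS dependencies. */
    static const uint8_t lbn_clause_mask[2][LBN_INPUT_BITS] = {
        { 1,1,0,0,0,0,0,0 },   /* clause 0 watches input bits 0 and 1 */
        { 0,0,1,1,0,0,0,0 },   /* clause 1 watches input bits 2 and 3 */
    };
    static const int8_t lbn_clause_vote[2] = { +1, -1 };

    /* Hypothetical entry point a generated SDK might expose */
    int lbn_predict(const bool input[LBN_INPUT_BITS]) {
        int score = 0;
        for (int c = 0; c < 2; c++) {
            bool fires = true;
            for (int b = 0; b < LBN_INPUT_BITS; b++)
                if (lbn_clause_mask[c][b] && !input[b])
                    fires = false;
            if (fires)
                score += lbn_clause_vote[c];
        }
        return score > 0;      /* plain integer compare decides the class */
    }

Code of this shape compiles with any C toolchain that targets the part, which is what lets the same model drop onto ARM, RISC-V, PowerPC, or x86 without new silicon.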

±2% accuracy difference

Accuracy without excess

The average LBN model is under 40 kB, with accuracy within ±2% of larger, GPU-dependent algorithms. Lean in memory. Sharp in performance.

What makes LBNs different?

LBNs aren't neural networks. They're a new class of AI model, built from logic instead of weights.

Learn more

54× faster

Inference speed benchmarked on ARM Cortex-M7 against a neural network FC Autoencoder, using the MLPerf Tiny anomaly detection specification.


52× less energy

455 µJ per inference or forecast, measured against a coin-cell-equivalent battery. That's a prediction every 5 seconds for 10 years with no battery change.


On-device, off cloud

LBNs run without a GPU or a cloud connection. Inference happens on device, avoiding data streaming and round-trip latency. Typically under 5 kB, models fit where neural networks cannot.

Milling models

The edge is 4 steps away

From raw data to a deployment-ready model, ModelMill handles the complexity so you don't have to. Import your data, set your target hardware, and let the platform train, benchmark, and package your LBN — ready to embed wherever you need it.

01 — Import

Your data,
processed & annotated

Upload your dataset via browser or API. CSV, JSON, or ZIP, up to 5 GB. Pre-processing, normalisation, and annotation are all handled in-platform.
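
The upload API itself isn't documented on this page, so as a rough illustration only, a multipart file upload from C with libcurl would look something like this. The endpoint URL and form field name are placeholders, not ModelMill's real API.

    #include <curl/curl.h>

    /* Illustrative multipart upload; endpoint and field name are placeholders. */
    int upload_dataset(const char *path) {
        CURL *curl = curl_easy_init();
        if (!curl) return -1;

        /* Attach the dataset file (CSV, JSON, or ZIP) as a form part */
        curl_mime *form = curl_mime_init(curl);
        curl_mimepart *part = curl_mime_addpart(form);
        curl_mime_name(part, "dataset");                 /* placeholder field */
        curl_mime_filedata(part, path);

        curl_easy_setopt(curl, CURLOPT_URL,
                         "https://api.example.invalid/v1/datasets"); /* placeholder */
        curl_easy_setopt(curl, CURLOPT_MIMEPOST, form);

        CURLcode rc = curl_easy_perform(curl);           /* POST the form */
        curl_mime_free(form);
        curl_easy_cleanup(curl);
        return rc == CURLE_OK ? 0 : -1;
    }

    int main(void) {
        return upload_dataset("readings.zip");
    }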


02 — Target

Tell it where
to run

Define your deployment hardware and set performance priorities — energy, speed, or memory. The platform configures itself around your constraints automatically.


03 — Train

The platform
does the work

ModelMill auto-configures and trains hundreds of LBN candidates in parallel, benchmarking each one against your target hardware and performance goals.


04 — Deploy

Your model,
small hardware

Select your candidate and ModelMill wraps it into a C SDK, complete with inference engine, build configuration, and documentation for embedded or server deployment.

Learn more about ModelMill
Real-world impact

Use cases and benchmarks

Looking to understand how and where LBNs can be deployed? From benchmarks to deployments, LBNs are working in production across automotive, utilities, supply chain, and semiconductors.


Helping hydro

Embedded hydro-informatics AI runs on battery-powered sensors in sewers, monitoring harsh flows without a cloud connection.

Learn more

Well stocked, well fed

Inventory forecasts cut from 4 hours to 3 minutes, with 2× better accuracy by WMAPE across thousands of SKUs.


Fast cars

Edge AI for cars: logic-based models deployed in tight spaces where only PowerPC hardware can run.


Anomalous performance

Benchmarked 54× faster than the like-for-like best in class on ToyADMOS anomaly detection.

Learn more right arrow

Battery life

LBNs cut power use so far that a coin cell runs for 10 years, up from 3 months with an RNN.


Boosted boost

LBNs run up to 250× faster than XGBoost while using 130 kB less memory.

Learn more
Be early to logic-based AI

Book a 20-minute intro call & demo


Join the companies already training LBN models that run faster, use less energy, and embed on the hardware they already own.

Get started with ModelMill

Frequently asked questions

How do I train an AI model using Literal Labs?

Training LBN models with Literal Labs is designed to be straightforward. Upload your dataset to ModelMill via the browser-based interface, configure a small number of training options, and start training. ModelMill handles data preparation, model training, benchmarking, and optimisation automatically. Once complete, your trained model is ready for deployment to your chosen hardware.

What types of use cases does the platform support today?

It’s currently focussed on industrial and operational AI use cases, including anomaly detection, predictive maintenance, time-series forecasting, sensor analytics, and decision intelligence. These are problems where reliability, efficiency, and explainability matter as much as raw accuracy. Support continues to expand as new model types and capabilities are released.

Do I need AI or data science expertise to use the platform?

No. The platform is built to be usable by teams without dedicated AI or data science specialists. Most workflows can be completed through the guided web interface with minimal configuration. For engineering teams that want deeper control, the API provides advanced options and tighter integration, but this is entirely optional.

Do I need GPUs or specialised hardware to use it?

No. Model training is handled by Literal Labs' managed infrastructure, and trained models do not require GPUs or accelerators to run. Models produced by the platform are designed to operate efficiently on standard CPUs, microcontrollers, and edge devices, as well as on servers.

What infrastructure do I need to get started?

Very little. To begin, you only need a supported dataset and a web browser. There is no requirement to provision training infrastructure, manage clusters, or install complex toolchains. Deployment targets can range from embedded devices to cloud servers, depending on your use case.

How does Literal Labs differ from traditional ML platforms?

Traditional ML platforms are built around large, numerically intensive models that demand specialised hardware and complex operational pipelines. Literal Labs takes a different approach, producing compact, efficient models optimised for real-world deployment. The result is faster training, simpler deployment, lower energy consumption, and models that are easier to understand and maintain.

How do I get access, pricing, or start a pilot?

You can request access or discuss pricing directly by contacting us. The team offers guided pilots for companies that want to evaluate the platform on real data and real deployment targets. This allows you to assess performance, integration effort, and business value before committing further.