LBN training platform

Train logic-based AI models intelligently

One platform. Every stage of training. From automated configuration to benchmarking and deployment, train logic-based network (LBN) AI models using a platform that guides you every step of the way. Work in your browser or via API, and produce models that are faster, more efficient, explainable, and ready for the real world.

How to train LBNs

Training, guided by logic

It starts with data. It ends with a model precision-tuned to your goals. Upload a dataset, define your deployment targets, and let the platform guide configuration, training, and tuning. Every LBN is shaped to the exact balance of accuracy, performance, and hardware you need.

Structured from the start

LBNs begin with training data.

Whether proprietary or open-source, your data enters the platform ready to be shaped. Annotation improves accuracy, while automated preprocessing applies logic-based transformations that strip away noise, highlight signals, and prepare datasets for efficient, high-quality training.

The first steps in training an LBN model are importing and annotating data, after which preprocessing and binarisation are automated.

Precision configured

You define the outcome. The platform does the rest.

You describe how the model should behave and where it must run, from energy use to memory limits and instruction set. The platform then automatically explores and configures thousands of LBN variants, applying intelligent AutoML decisions to converge quickly on the optimal AI model.
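Conceptually, the deployment targets described above amount to a constraint specification that the AutoML search must satisfy. A minimal sketch of what such a specification could look like — the field names and values here are illustrative assumptions, not the platform's actual configuration schema:

```python
# Hypothetical constraint specification; field names are illustrative,
# not the platform's documented schema.
deployment_constraints = {
    "target": "cortex-m4",                # example MCU / instruction-set target
    "max_memory_kb": 256,                 # memory budget on the device
    "max_energy_mj_per_inference": 0.5,   # energy envelope per inference
    "min_accuracy": 0.92,                 # behavioural requirement
}

# A sanity check of the kind a configuration step might perform locally.
assert 0.0 < deployment_constraints["min_accuracy"] <= 1.0
assert deployment_constraints["max_memory_kb"] > 0
```

The search then only considers LBN variants whose measured footprint and accuracy fall inside these bounds.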

Training at scale

With configuration set, training begins.

The platform automatically trains thousands of logic-based models in parallel, each shaped by your constraints. Performance is measured continuously, allowing the system to converge rapidly on the model that best balances speed, accuracy, and energy use.

Chart: training history, showing accuracy, F1, precision, and recall scores tracked across training epochs.
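The four scores tracked during training — accuracy, F1, precision, and recall — can all be computed directly from a model's predictions. A minimal, platform-independent illustration for binary labels:

```python
def classification_scores(y_true, y_pred):
    """Compute accuracy, precision, recall, and F1 for binary (0/1) labels."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    accuracy = (tp + tn) / len(y_true)
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
    return accuracy, precision, recall, f1

# Toy example: 5 samples, 3 correct predictions.
acc, prec, rec, f1 = classification_scores([1, 0, 1, 1, 0], [1, 0, 0, 1, 1])
```

Tracking these per epoch, as the chart does, shows whether the search is converging and whether accuracy gains are coming at the cost of precision or recall.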

Validated performance

A billion to one.

The platform automatically tests and benchmarks thousands of trained LBNs, performing billions of calculations to identify the few models that best fit your data and deployment constraints. Each candidate is evaluated objectively, removing guesswork and trial and error.

Performance is verified through automated benchmarking and, where needed, remote hardware validation. Accuracy, speed, and energy use are measured under real-world conditions, giving you a clear, trusted view of how each model will perform before it ever ships.


Trained to deploy

Once the optimal LBN is selected, the platform packages it precisely for where it will run. Whether embedded at the edge or executed on managed servers, each model is delivered ready to perform, using the hardware you already have. No accelerators. No excess. Just AI that fits.

SDK
Export lightweight, optimised C++ models for embedded deployment on MCUs, DSPs, and edge devices.
Managed Inference Server
Run models on Literal Labs’ CPU-based inference servers, accessed via API and scaled for enterprise workloads.
LBNs can be deployed to a Managed Inference Server instance, or to an edge device as C++ code.
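As a sketch of what calling a managed inference endpoint might look like — the URL, authentication scheme, and payload fields below are assumptions for illustration, not Literal Labs' documented API:

```python
import json

# Hypothetical request payload; the field names are illustrative only.
payload = {
    "model_id": "lbn-example-001",       # placeholder model identifier
    "inputs": [[0.12, 0.87, 0.43]],      # one feature vector per request row
}
body = json.dumps(payload)

# An actual call might resemble the following (not executed here):
#   requests.post("https://inference.example.com/v1/predict",
#                 headers={"Authorization": "Bearer <API_KEY>"},
#                 data=body)
```

Because inference runs on CPU-based servers, the same request path scales from a single prototype call to enterprise workloads without GPU provisioning.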

Start training logic-based AI

Join early access to the platform designed to train efficient, explainable AI models that scale without new infrastructure.

Request early access

01 Import your datasets

Upload your own data or start with open datasets. Prepare it once. Train with confidence.

02 Train and tune

Intelligent AutoML guides training towards the optimal balance of speed, accuracy, and efficiency.

03 Deploy logically

Embed models via SDK or execute them on managed inference servers — no GPUs required.

Frequently asked questions

What does the software platform do?

The platform is used to train, benchmark, and deploy Logic-Based Network (LBN) models. It takes care of data preparation, model training, optimisation, and evaluation, then produces deployable models ready for use in production systems. The goal is to remove the operational and technical friction typically associated with building and deploying AI.

Can the software train generative AI models?

Not yet. The platform is currently focused on deterministic LBNs rather than generative models. This reflects a deliberate focus on reliability, efficiency, and explainability for industrial and operational use cases. Capabilities continue to expand over time; to stay informed as new model types are introduced, you can subscribe to the newsletter.

What types of data does the platform support?

The platform currently supports structured datasets provided in CSV format, including time-series, sensor, and tabular data. These data types are common across industrial, operational, and decision intelligence applications and align well with the strengths of LBNs. Support for additional formats will be added over time.
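For example, a time-series sensor dataset in the supported CSV format might look like the following; the column names and values are illustrative, not a required schema:

```python
import csv
import io

# Illustrative sensor CSV: a timestamp, two sensor channels, and a label.
raw = """timestamp,vibration,temperature,label
2024-01-01T00:00:00,0.12,21.4,normal
2024-01-01T00:00:01,0.98,35.2,fault
2024-01-01T00:00:02,0.11,21.6,normal
"""

# Parse rows the same way any standard CSV tooling would.
rows = list(csv.DictReader(io.StringIO(raw)))
labels = [r["label"] for r in rows]
```

Any dataset that can be flattened into rows and columns like this — tabular records, sensor streams, time series — fits the current import path.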

How much AI experience does a platform user need?

Very little prior AI experience is required. The platform provides a guided, browser-based interface that allows users to train models with minimal configuration. For more technical teams, an API is available to enable deeper control, automation, and integration into existing workflows, but this is optional rather than required.

Where can the platform deploy LBNs?

Models trained by the platform can be deployed across a wide range of environments, from embedded and edge devices through to on-premise servers and cloud infrastructure. The platform is designed to support real-world deployment scenarios rather than limiting models to research or hosted inference environments.

What hardware is it compatible with for deployment?

LBNs produced by the platform are hardware-agnostic. They can run efficiently on microcontrollers, standard CPUs, and server-class hardware without requiring GPUs or specialised accelerators. This makes deployment possible in resource-constrained environments as well as at scale.

Does the platform manage training infrastructure?

Yes. Training is handled on managed infrastructure, so users do not need to provision, maintain, or optimise their own training hardware. This reduces operational overhead and allows teams to focus on data, use cases, and outcomes rather than infrastructure.

How does the platform fit into existing systems?

The platform is designed to integrate cleanly with existing data pipelines and production systems. Trained models can be exported and embedded directly into applications, devices, or services, while the API enables automation and integration into CI/CD or MLOps-style workflows.