Code that ships where your hardware lives
Train an LBN. Receive a self-contained SDK in portable C. Embed it.
You're ready to compile, integrate, and deploy edge AI on the hardware you already use.
Everything included. Nothing else required
When your Logic-Based Network finishes training, the platform produces a complete SDK. Not a binary. Not a dependency chain. A single package of portable C source code containing the model, the inference engine, build configuration, example code, and documentation.
There is nothing extra to install. Nothing to download. Nothing to link against. An LBN's SDK is self-contained by design — engineered so that the moment it reaches your team, integration work can begin.
Your toolchain. Your rules
The SDK arrives as source code, not a pre-compiled artefact. Your engineers compile it natively on whatever hardware and toolchain the edge project already uses. No new build systems. No unfamiliar dependencies. No compromises.
Integration follows standard embedded C practices. Add the SDK directory to your existing project. Include the header. Then initialise the LBN, pass in your data, and receive a prediction.
You can compile the LBN statically into your project, alongside your application, precisely as you would any other module. And because the SDK is written in C, it integrates naturally with C++ codebases as well.
// ------------------------------------------------------------------------------
// ██╗     ██╗████████╗███████╗██████╗  █████╗ ██╗         ██╗      █████╗ ██████╗ ███████╗
// ██║     ██║╚══██╔══╝██╔════╝██╔══██╗██╔══██╗██║         ██║     ██╔══██╗██╔══██╗██╔════╝
// ██║     ██║   ██║   █████╗  ██████╔╝███████║██║         ██║     ███████║██████╔╝███████╗
// ██║     ██║   ██║   ██╔══╝  ██╔══██╗██╔══██║██║         ██║     ██╔══██║██╔══██╗╚════██║
// ███████╗██║   ██║   ███████╗██║  ██║██║  ██║███████╗    ███████╗██║  ██║██████╔╝███████║
// ╚══════╝╚═╝   ╚═╝   ╚══════╝╚═╝  ╚═╝╚═╝  ╚═╝╚══════╝    ╚══════╝╚═╝  ╚═╝╚═════╝ ╚══════╝
// Edge AI SDK
// Copyright (c) Literal Labs 2026
// For full license text, see the LICENSE.md file in the SDK root.
//
// This file was autogenerated using the Literal Labs pipeline. Do not modify
// this file directly. If you need to make changes, please reinvoke the
// pipeline with the appropriate configuration.
// ------------------------------------------------------------------------------

#include <stdio.h>
#include <stdlib.h>
#include "ll_wrapper.h"
Built for any target. Tested against many
The generated code uses no dynamic memory allocation and carries a minimal footprint. It is engineered to be portable in the truest sense — not merely “cross-platform” in name, but genuinely compilable wherever a standard C compiler exists.
Literal Labs’ training platform validates every SDK against amd64, ARMv7-M, and PowerPC targets as a matter of course. Yet the source code is not confined to these architectures. Any platform with a conforming C compiler is a valid deployment target.
Engineered to disappear
The best infrastructure is invisible, even when it’s an edge AI model inside an IoT sensor or other device. Each custom-generated SDK is designed to integrate so cleanly into your existing workflow that, after the initial setup, you scarcely notice it is there. No new toolchains to learn. No runtime surprises. No maintenance burden.
This is deliberate. An SDK that demands attention is an SDK that slows you and your product down. Literal Labs’ approach is to deliver inference as a native component of your application — compiled in, statically linked, and indistinguishable from code your own team wrote.
Embed logic-based intelligence at the edge
Join early access to the platform that trains efficient, explainable AI models — and delivers them ready to compile into your devices.