Overcome the infrastructure strain of modern AI. Logic-based network (LBN) algorithms deliver fast, accurate, and explainable inference that runs on the smallest semiconductors. This isn’t optimisation — it’s a solution.
Because scaling AI shouldn't require scaling the planet. Logic-based AI runs on existing hardware: the chips you've likely already deployed. It cuts energy use and deployment costs by up to 90%. It delivers power without the power plants.
GPU-free training and inference
LBNs run on MCUs and CPUs
Output that's explainable and hallucination-free
Battery-powered, IoT device compatible
Most AI today is built from numbers: vast, costly numerical computation that burns power and hardware alike. LBNs are built from logic. They are lean, explainable, and fast, running on existing silicon with no accelerators needed. What powers them is a series of technologies that make logic itself the engine of intelligence.
Powerful by design. Practical by intent. LBNs are designed for industrial AI where speed, efficiency, and explainability matter more than novelty. They're trained using private or open data, creating custom models that deploy at the edge or on servers. From forecasting to detection, the catalogue of supported use cases is expanding, with more models released regularly.
Now that you've seen what logic-based AI can do, train your own models on the platform behind their speed, efficiency, and explainability.
What exactly is a Logic-Based Network (LBN)?
A Logic-Based Network (LBN) is an AI model built from logical expressions rather than numerical weight matrices. Instead of learning by adjusting millions of floating-point parameters, an LBN learns combinations of logical rules. This makes LBNs fast, efficient, explainable, and deterministic by design.
How do LBNs differ architecturally from neural networks?
Neural networks are composed of layers of numerical operations, primarily matrix multiplication followed by non-linear functions. LBNs, by contrast, are composed of logical structures that evaluate conditions and combinations of conditions. Architecturally, this replaces dense numerical computation with symbolic logic evaluation, resulting in smaller models and far lower computational overhead.
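For readers who want to see the contrast in code, here is a minimal sketch. It is illustrative only and not the actual LBN implementation: the two functions and the rule format, a conjunction of (feature_index, required_value) pairs, are assumptions made for the example.

```python
import numpy as np

# Neural-network style: dense numerical computation.
def neural_layer(x, W, b):
    # Matrix multiplication followed by a non-linearity (ReLU).
    return np.maximum(W @ x + b, 0.0)

# Logic-based style: evaluate conditions and combinations of conditions.
# Hypothetical rule format: a rule is a list of (feature_index, required_value)
# pairs that must all hold (a conjunction); the model fires if any rule holds.
def lbn_predict(bits, rules):
    return any(all(bits[i] == v for i, v in rule) for rule in rules)

x = np.array([0.7, 0.1, 0.9])
W = np.array([[0.2, -0.5, 0.3], [0.1, 0.4, -0.2]])
b = np.array([0.05, -0.1])
print(neural_layer(x, W, b))          # two floating-point activations

bits = [True, False, True]            # the same inputs after binarisation
rules = [[(0, True), (1, False)],     # "signal 0 high AND signal 1 low"
         [(2, False)]]                # "signal 2 low"
print(lbn_predict(bits, rules))       # True: the first rule is satisfied
```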
Why does using logic instead of numerical optimisation matter?
Logic-based learning avoids the heavy cost of numerical optimisation and execution, both of which require repeated floating-point operations and specialised hardware such as GPUs. By operating in a logical domain, LBNs can learn and infer using simple, discrete operations. This dramatically reduces compute, energy use, and latency, while improving predictability and deployment flexibility, including the ability to deploy models without GPUs or any other accelerator.
How are logical expressions learned during training?
During training, the LBN incrementally constructs and refines logical expressions that best explain the observed data. These expressions are evaluated against the dataset, reinforced when they contribute to correct predictions, and weakened or discarded when they do not. Over time, the network converges on a compact set of logic rules that collectively define the model's behaviour.
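As a rough illustration of what such a propose, score, and keep loop could look like, here is a toy sketch. The train_rules function, the two-condition candidate rules, and the precision threshold are all assumptions for the example, not the actual training procedure.

```python
import random

def train_rules(dataset, n_rules=20, epochs=30, min_precision=0.6):
    """Toy training loop: propose candidate rules, score them against the data,
    keep the ones that help explain positive labels, and replace the rest."""
    n_features = len(dataset[0][0])

    def propose():
        # A candidate rule: a small conjunction over randomly chosen features.
        return [(random.randrange(n_features), random.choice([True, False]))
                for _ in range(2)]

    rules = [propose() for _ in range(n_rules)]
    for _ in range(epochs):
        survivors = []
        for rule in rules:
            # Labels of the samples on which the rule fires (all conditions hold).
            hits = [label for bits, label in dataset
                    if all(bits[i] == v for i, v in rule)]
            precision = sum(hits) / len(hits) if hits else 0.0
            # Reinforce (keep) rules that predict well; discard the rest.
            if precision >= min_precision:
                survivors.append(rule)
        rules = survivors + [propose() for _ in range(n_rules - len(survivors))]
    return rules

# Tiny synthetic dataset: label is 1 when feature 0 is high and feature 1 is low.
data = [([True, False, True], 1), ([True, False, False], 1),
        ([False, True, True], 0), ([True, True, False], 0)]
print(train_rules(data)[:3])
```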
Are LBNs trained using deep learning?
Yes. LBNs are trained using deep learning techniques, but with a fundamentally different internal representation. Instead of learning numerical weights within layered matrices, the training process learns and refines logical expressions. This means LBNs benefit from the strengths of deep learning — such as learning complex patterns from data — while avoiding the heavy computational cost and opacity typically associated with neural networks.
What role does data binarisation play in LBNs?
Data binarisation converts raw input features into logical signals that can be evaluated efficiently. This step allows continuous or categorical data to be expressed in a form suitable for logical reasoning. Binarisation is central to LBN performance, as it bridges real-world data with the logical structures used during training and inference.
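As an illustration of the idea, here is a toy sketch of binarising one continuous and one categorical feature. The quantile-threshold scheme and the function names are assumptions for the example, not the actual binarisation used by LBNs.

```python
import numpy as np

def binarise_continuous(column, n_thresholds=4):
    """Toy binarisation: encode a continuous feature as a set of threshold
    comparisons, each one a logical signal of the form "value > t"."""
    thresholds = np.quantile(column, np.linspace(0.2, 0.8, n_thresholds))
    return np.stack([column > t for t in thresholds], axis=1)

def binarise_categorical(column, categories):
    """Toy binarisation of a categorical feature: one logical signal per category."""
    return np.stack([np.asarray(column) == c for c in categories], axis=1)

temps = np.array([18.2, 19.5, 21.0, 22.4, 35.1])      # a sensor reading
print(binarise_continuous(temps).astype(int))          # four true/false signals per row

states = ["idle", "running", "running", "fault", "idle"]
print(binarise_categorical(states, ["idle", "running", "fault"]).astype(int))
```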
How do LBNs scale with increasing data dimensionality?
LBNs scale well with high-dimensional data because they do not rely on dense parameter matrices. Instead, they selectively construct logical rules over relevant features. This means additional dimensions increase expressive capacity without a proportional increase in computational cost, making LBNs well suited to wide, sparse, or sensor-heavy datasets.
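A toy way to see why cost does not grow with input width, reusing the illustrative rule format from above: evaluation touches only the conditions named in the learned rules, however wide the sample is. The timing comparison below is a sketch under that assumption, not a benchmark of a real LBN.

```python
import timeit

def lbn_predict(bits, rules):
    # Inference checks only the conditions in the learned rules,
    # never the full input width.
    return any(all(bits[i] == v for i, v in rule) for rule in rules)

rules = [[(0, True), (3, False), (7, True)]]   # three conditions, regardless of width

narrow = [True] * 10        # 10 binarised features
wide   = [True] * 10_000    # 10,000 binarised features

# Both calls evaluate the same three conditions; the extra features are
# never touched because no learned rule references them.
print(timeit.timeit(lambda: lbn_predict(narrow, rules), number=100_000))
print(timeit.timeit(lambda: lbn_predict(wide, rules), number=100_000))
```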
What are the accuracy–efficiency trade-offs of LBNs?
When trained on the same datasets and evaluated on the same tasks, LBNs typically achieve accuracy within ±2% of comparable neural network models. In several decision intelligence use cases, LBNs have gone further, delivering accuracy improvements of up to 20% over neural networks trained on the same data. These gains are achieved alongside substantial improvements in efficiency, making them better suited to commercial environments where both accuracy and efficiency matter.
How explainable are LBN decisions in practice?
LBNs are inherently explainable. Because decisions are made through logical expressions, it is possible to inspect which conditions contributed to a given output. This enables clearer reasoning about model behaviour, supports debugging and validation, and helps meet regulatory or operational transparency requirements.
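Continuing the illustrative rule format from earlier, a toy explanation routine might simply report which rules fired and the conditions they checked. This is a sketch of the idea, not the product's actual explanation output.

```python
def explain(bits, rules, feature_names):
    """Toy explanation: report every rule that fired and the conditions it checked."""
    reasons = []
    for rule in rules:
        if all(bits[i] == v for i, v in rule):
            reasons.append(" AND ".join(
                f"{feature_names[i]} is {'high' if v else 'low'}" for i, v in rule))
    return reasons

feature_names = ["vibration", "temperature", "pressure"]
bits = [True, False, True]                       # one binarised sensor sample
rules = [[(0, True), (1, False)], [(2, False)]]  # two learned rules
for reason in explain(bits, rules, feature_names):
    print("Fired:", reason)
# -> Fired: vibration is high AND temperature is low
```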
What types of problems are LBNs particularly well-suited to?
LBNs excel in problems where efficiency, determinism, and interpretability are critical. Typical examples include anomaly detection, predictive maintenance, time-series analysis, sensor fusion, and decision intelligence.
What types of data can LBNs be trained with?
LBNs can be trained on structured datasets such as time-series, sensor, and tabular data, typically provided in CSV format. These data types align naturally with logical representations and are common in industrial, operational, and analytical applications. Support for additional data formats will expand over time.
Where can I deploy LBNs?
LBNs are highly portable. Their small size and low compute requirements allow deployment on microcontrollers, edge devices including battery-powered IoT devices, CPUs, and servers. They are silicon-agnostic and can be integrated into embedded systems, on-premise infrastructure, or cloud environments without requiring specialised accelerators.
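To sketch why this workload suits such small hardware, a conjunction of binarised conditions can be compiled down to a pair of integer masks, so each rule check is a bitwise AND and a comparison. The compilation scheme below is an illustrative assumption, not the actual deployment format; the same logic ports directly to C on a microcontroller.

```python
def compile_rule(rule):
    """Toy compilation: a conjunction of binarised conditions becomes two
    integer masks, so inference is a couple of bitwise operations."""
    care = 0   # which bit positions the rule looks at
    want = 0   # the value each of those bits must take
    for i, required in rule:
        care |= 1 << i
        if required:
            want |= 1 << i
    return care, want

def fires(sample_bits, care, want):
    # One AND and one comparison per rule: comfortably within reach of a
    # small MCU, with no accelerator and very little RAM.
    return (sample_bits & care) == want

care, want = compile_rule([(0, True), (1, False)])
print(fires(0b00000101, care, want))   # True: bit 0 is set and bit 1 is clear
print(fires(0b00000111, care, want))   # False: bit 1 is set
```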