Edge AI London: MERIT

Professor Alex Yakovlev, Co-Founder of Literal Labs, will present at Edge AI London this June, bringing to the stage a new methodology that may be long overdue in the field of embedded and edge artificial intelligence.

His session, on Tuesday 9 June, introduces MERIT: Model Efficiency and Resource ImpacT. It is a single, bounded metric designed to measure both predictive performance and energy cost simultaneously — something the existing frameworks have, until now, handled poorly. Scores such as ACEv2 are unbounded, which makes them difficult to interpret at a commercial level. MERIT is not. It runs on a 0–100 scale, increases strictly with accuracy, and decreases strictly with energy cost. A cleaner instrument, in other words, for a comparison that matters increasingly as AI moves onto hardware with genuine constraints.
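To make those properties concrete, here is a minimal sketch of a score with the same shape as described above: bounded on a 0–100 scale, strictly increasing with accuracy, and strictly decreasing with energy cost. The formula and the `alpha` weighting parameter are illustrative assumptions, not the published MERIT definition.

```python
def bounded_efficiency_score(accuracy: float, energy_joules: float,
                             alpha: float = 1.0) -> float:
    """Illustrative bounded metric (NOT the actual MERIT formula).

    accuracy:      fraction of correct predictions, in [0, 1]
    energy_joules: energy cost per inference, must be positive
    alpha:         assumed weighting factor for the energy penalty
    """
    if not 0.0 <= accuracy <= 1.0:
        raise ValueError("accuracy must lie in [0, 1]")
    if energy_joules <= 0.0:
        raise ValueError("energy cost must be positive")
    # Strictly increasing in accuracy, strictly decreasing in energy,
    # and confined to the open interval (0, 100).
    return 100.0 * accuracy / (1.0 + alpha * energy_joules)
```

Any formula with these monotonicity and boundedness properties lets two models be ranked on one axis: a more accurate model scores higher at equal energy, and a cheaper model scores higher at equal accuracy.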

The context is Edge AI deployment on low-cost embedded platforms — microcontrollers, standard CPUs — where neural networks, even in quantised form, remain too expensive to run at scale. Logic-Based Networks (LBNs) have demonstrated inference improvements of up to 100x against binarised neural networks on established benchmarks. MERIT gives developers and executives alike a means to evaluate that advantage in a single figure, without ambiguity about what the number means or where it ends.

Edge AI London attracts precisely the audience for whom this matters: engineers deciding which model architecture to deploy, and the people who must price and justify those decisions upward. MERIT is intended to serve both.

Professor Yakovlev's presentation will also demonstrate the training of LBN models with ModelMill and the deployment of those same models across a range of embedded targets. Attendees can find the session on the event's main stage at 15:40 on Tuesday 9 June.