Within each of Literal Labs' AI models sits something known as a Tsetlin machine. It's one of the foundations that enable our AI models to be naturally explainable while offering state-of-the-art performance.
A Tsetlin machine (TM) belongs to a family of machine learning algorithms, and associated computational architectures, that uses the principles of learning automata, called Tsetlin automata, and game theory to create logic propositions for classifying data obtained from the machine's environment. These automata configure connections between input literals, which represent the features in the input data, and the propositions used to produce classification decisions. Feedback logic then determines whether each decision was correct or erroneous and issues rewards and penalties to the automata accordingly.
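To make this concrete, here is a minimal Python sketch of how clauses built from literals might vote on a classification. The clause layout, voting signs and function names are illustrative assumptions for exposition only, not Literal Labs' implementation.

```python
# Minimal sketch of a Tsetlin machine's decision step (illustrative only).
# Each clause is a logical AND over the literals its automata have switched ON.

def clause_output(included, literals):
    """A clause fires when every literal it includes is True."""
    return all(literals[i] for i in included)

def classify(clauses, features):
    # Literals are the input features plus their negations.
    literals = features + [not f for f in features]
    # Assumed convention: even-indexed clauses vote for the class,
    # odd-indexed clauses vote against it.
    votes = sum(
        (1 if idx % 2 == 0 else -1)
        for idx, included in enumerate(clauses)
        if clause_output(included, literals)
    )
    return votes >= 0

# Example: two features, four clauses given as indices of included literals.
clauses = [[0], [1], [0, 3], [2]]
print(classify(clauses, [True, False]))  # -> True
```

Because the decision is just a vote over fired AND-clauses, every prediction can be traced back to the specific literal patterns that triggered it, which is what makes the model naturally explainable.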
The TM approach is based on the ideas of the Soviet mathematician Mikhail Tsetlin. In the 1960s, at an early stage in the development of what was later called artificial intelligence (AI), Tsetlin realised the potential of modelling learning and logic as automata, as opposed to the detailed models of elementary biological neurons that other researchers were pursuing at the time, and which later became neural networks (NNs).
Between 1960 and Tsetlin's untimely death in 1966, he and his associates, including Victor Varshavsky, advisor to Literal Labs' co-founder Alex Yakovlev, developed theories, models, algorithms, computer programmes and applications that demonstrated the effectiveness of this approach in solving analysis, optimisation and adaptive control problems across fields ranging from engineering to economics, sociology and medicine.
After 1966, the research branched into various fields, including the control of complex systems and circuit design, but in its holistic form the Tsetlin approach to AI was left largely untouched.
The breakthrough algorithm combining Tsetlin automata with propositional logic was first published in 2018 by Ole-Christoffer Granmo, chair of Literal Labs' Technical Steering Committee and a professor at Norway's University of Agder. Its operation was initially demonstrated on image recognition, constructing logic propositions (known as clauses) from literals, with the connections configured by Tsetlin automata.
Combining Tsetlin automata with propositional logic yields a computational model for ML that is highly efficient in both energy and performance. The complex behaviour of systems can be modelled as teams, or collectives, of automata, allowing optimal decisions to be reached with greater reliability and redundancy. Such collectives operate on principles of statistical optimality and physical distribution in space, helping to alleviate criticalities and anomalies.
Tsetlin automata are trained by evolving each automaton through its states, which form a linear sequence. Each state represents the automaton's level of confidence in performing its action. The actions are associated with two subsets of states: one for switching a connection between an input literal and a clause ON, the other for switching it OFF. Because the states are organised in a linear sequence, this level of confidence can be controlled by simple transitions between states, rewarding or penalising the automaton's actions. These actions are somewhat similar to weights in NNs. However, unlike complex multiplication-based weights, the Tsetlin automata “weights” are simple logic signals that control the configuration of input literals in the clauses.
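As a rough illustration of that reward-and-penalty mechanism, the Python sketch below models a single two-action automaton walking along its chain of states. The state count, starting state and method names are arbitrary assumptions, not Literal Labs' code; real implementations pack many such automata into compact bit arrays.

```python
# Sketch of one Tsetlin automaton with 2*N states (illustrative only):
# states 1..N select the EXCLUDE action, states N+1..2N select INCLUDE.

class TsetlinAutomaton:
    def __init__(self, n_states_per_action=100):
        self.n = n_states_per_action
        self.state = self.n  # start at the boundary, weakly excluding

    def action(self):
        return "include" if self.state > self.n else "exclude"

    def reward(self):
        # A reward deepens confidence: move away from the decision boundary.
        if self.action() == "include":
            self.state = min(self.state + 1, 2 * self.n)
        else:
            self.state = max(self.state - 1, 1)

    def penalize(self):
        # A penalty erodes confidence: move toward, and possibly across,
        # the boundary, which can flip the automaton's action.
        self.state += -1 if self.action() == "include" else 1

ta = TsetlinAutomaton(n_states_per_action=3)
print(ta.action())  # 'exclude' (state 3, right at the boundary)
ta.penalize()
print(ta.action())  # 'include' (state 4, the penalty flipped the action)
ta.reward()
print(ta.state)     # 5: the reward deepened the include confidence
```

Each transition touches only an integer counter, which is why these “weights” amount to simple logic rather than the multiply-accumulate arithmetic of NN weights.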
Yakovlev, together with his Literal Labs co-founder Rishad Shafik and their team at Newcastle University, has been working on hardware and software implementations of Tsetlin machines, adding new data representation techniques (e.g. booleanisation and binarisation), parallelisation and compression methods based on indexing input literals, tiled architectures, and hardware-software codesign of machine learning systems. Together, these techniques amplify the TM's advantages by orders of magnitude, including inference up to 1,000X faster and orders-of-magnitude greater energy savings than neural networks. They have also brought a new level of understanding of the dynamics of machine learning by visualising the learning process and identifying important analytical characteristics of TM hyperparameters, such as thresholds on clause voting and feedback activation.
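As a hedged illustration of what a booleanisation step can look like, the sketch below uses simple thermometer encoding with arbitrary thresholds; this is one common scheme in the TM literature, and the team's actual data representation techniques may differ.

```python
# Thermometer encoding: one common way to booleanise a continuous feature
# for a Tsetlin machine (thresholds here are arbitrary, for illustration).

def thermometer_encode(value, thresholds):
    """Produce one boolean per threshold: True if the value reaches it."""
    return [value >= t for t in thresholds]

# e.g. encode a sensor reading against three fixed thresholds
print(thermometer_encode(22.5, thresholds=[10.0, 20.0, 30.0]))
# -> [True, True, False]
```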
Interested to learn how Tsetlin machines stack up against other AI technologies, including neural networks? Find out by exploring our models' benchmarks. Or you can learn more about how elements of our technology have been developed by exploring our team's published research papers.