
Consensus learning: harnessing blockchain for better AI

Flare’s latest research paper introduces a new approach to artificial intelligence (AI), in which combining AI with blockchain leads to safer and more accurate models.

Consensus learning (CL) enables collaborative AI across a spectrum of applications, supporting the development of more accurate and robust AI models. CL is particularly well suited to AI integration in data-sensitive sectors such as healthcare or finance, where it can improve decision-making and overall operational performance and efficiency, in turn lowering the cost of services for the end consumer. Concrete benefits include significantly better patient care outcomes, more accurate financial analysis, and enhanced fraud detection, among others. In contrast to most existing implementations of AI and blockchain, which enable access to centralized machine learning (ML) through the blockchain, CL leverages blockchain to create decentralized AI models.

Motivations

In recent years, there has been a growing emphasis on distributed environments, where data and computational resources are spread across multiple devices. This shift is prompted by the requirements of modern foundation models, such as large language models and computer vision models, which demand substantial amounts of data and compute. In this distributed yet still centrally coordinated setting, decentralization emerges as a fundamental need, driven by several key motivations.

Centralized methods carry inherent risks by relying on a single trusted party, which confines their use mainly to single-enterprise settings and constrains their broader adoption. Moreover, these architectures not only increase vulnerability to attacks and system failures, but also raise concerns about data privacy and security. Decentralized methods, by contrast, offer a distinct advantage: they enable users to develop personalized local models tailored to their specific requirements and preferences, a flexibility that centralized approaches often lack. Against these limitations, consensus learning emerges as a decentralized ML solution offering greater resilience, privacy, and adaptability, while mitigating the inherent risks of centralization.

Benefits of consensus learning

Consensus protocols are essential to the security of decentralized ledgers and protect blockchain networks from malicious attacks. Harnessing consensus mechanisms for AI has many benefits, among which we highlight the following:

  • Increased performance. CL methods benefit from the data of each of their ensemble contributors, reducing bias and enhancing the models’ ability to generalize to unseen data. CL can also lead to more accurate AI than centralized methods, primarily because blockchain’s ability to incentivize collaboration makes it easier to combine insights from diverse models. This is achieved via multiple local aggregations, in which each participant assesses the predictions of neighboring models and integrates them for better accuracy (see the sketch after this list). This is one of the first instances where AI can gain significant advantages from blockchain integration.
  • Security. In the presence of malicious actors attempting to introduce hidden objectives, the integrity of CL models remains uncompromised thanks to the built-in safety features of consensus mechanisms. This helps prevent AI systems from producing deliberately harmful predictions or unintended inaccuracies, both hallmarks of malicious AI. Consequently, CL addresses a major concern within the AI community: protecting AI from exploitation for detrimental purposes. By upholding the integrity of the collaborative learning process, CL instills greater trust and confidence in AI systems, paving the way for their responsible and ethical deployment.
  • Data privacy. In CL, neither the underlying data of network participants nor their individual models are shared at any point. Because data remains stored locally, no attack on the network can compromise data confidentiality. Preserving privacy not only encourages collaboration but also maintains competitiveness. In this regard, CL enables data monetization through AI, especially for sensitive or commercial data such as healthcare records, overcoming challenges previously encountered in centralized environments.
  • Full decentralization. Data and computational resources are spread across a network of participants that communicate without relying on a single central server. The need for decentralization is especially visible in modern ML applications, given the vast resources they demand and the increasing complexity of ML models. Decentralized ML thus emerges as a better fit for preserving data privacy and ensuring security.
  • Efficiency. The learning process has low latency and requires far less computation time, energy, and resources than other state-of-the-art decentralized ML methods. This makes CL particularly suitable for real-time applications, where quick decision-making and efficient resource utilization are paramount.
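
To make the local aggregation step mentioned above more concrete, here is a minimal sketch of how a single participant might blend its own class probabilities with those received from neighbors. It is purely illustrative and not taken from the paper: the function name, the reliability weights, and the self-confidence parameter are all assumptions.

```python
import numpy as np

def local_aggregation(own_pred: np.ndarray,
                      neighbor_preds: list[np.ndarray],
                      neighbor_weights: list[float],
                      self_confidence: float = 0.5) -> np.ndarray:
    """Hypothetical local aggregation step: blend a participant's own class
    probabilities with neighbors' predictions, weighted by how reliable the
    participant judges each neighbor to be."""
    weights = np.asarray(neighbor_weights, dtype=float)
    weights /= weights.sum()
    neighbor_avg = np.average(np.stack(neighbor_preds), axis=0, weights=weights)
    # Mix the participant's own view with the weighted neighborhood view.
    combined = self_confidence * own_pred + (1.0 - self_confidence) * neighbor_avg
    return combined / combined.sum()  # renormalize to a probability vector

# Example: binary classification, probabilities for classes [0, 1].
mine = np.array([0.8, 0.2])
neighbors = [np.array([0.6, 0.4]), np.array([0.3, 0.7])]
print(local_aggregation(mine, neighbors, neighbor_weights=[0.7, 0.3]))
```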

How it works

Consensus learning enhances ensemble methods through a communication phase, in which participants share their model outputs until agreement is reached. CL is a two-stage process that can be implemented as follows:

  • Individual learning phase. Each network participant develops their own model based on their private data and any other publicly available data. This can range from building a model from scratch to fine-tuning large pre-trained models to their needs. Crucially, a participant is never required to share sensitive information about their data or model. Once training is complete, participants prepare their initial predictions for a testing dataset – this can be a dataset disclosed through a smart contract, or, alternatively, participants may propose new testing data points through, for instance, a Proof-of-Stake mechanism.
  • Communication phase. Participants transmit their initial predictions within the network according to a consensus/gossip protocol. During these exchanges, participants continuously update their predictions to reflect the assessments of the other network participants as well as the confidence they place in their own predictions. In addition, a participant can monitor the quality of predictions received from the rest of the network and use this information to improve its decision-making. At the end of this phase, participants reach agreement (“consensus”) on the decision deemed optimal given the information available within the network. This phase is then repeated for any new data inputs.


Figure caption: An example of how CL works for a binary classification task. (a) In the first stage, participants develop their own models, based on their own data, and possibly other data willingly shared by other participants. At the end of this phase, each model determines an initial prediction (represented by the hollow circles) for any inputs of the testing dataset. (b) In the communication phase, participants exchange and update their initial predictions, eventually reaching consensus on a single output (represented by the filled circles). This phase is repeated for any new data inputs.

Strictly speaking, the algorithm described above refers to a supervised ML scenario – a setting where the training datasets are already labeled, and where the algorithm predicts the labels of new, unseen testing data. However, CL can also be adapted to self-supervised or unsupervised ML problems, where participants only have access to partly or completely unlabeled data. The objectives of these methods are slightly different, requiring participants to employ different techniques during the individual learning phase. Nonetheless, the communication phase would proceed in much the same way as described above; a minimal sketch of the supervised case follows.
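
To ground the two phases in something concrete, here is a minimal end-to-end sketch for the binary classification setting of the figure. It is not the protocol from the paper: the logistic-regression base models, the synthetic data, the ring topology, and the plain averaging rule are simplifying assumptions that stand in for the participants’ private models and the secure gossip protocol.

```python
# A minimal, illustrative sketch of the two-phase consensus learning workflow
# for binary classification. The base models, data, topology, and update rule
# are assumptions made for demonstration, not the scheme from the paper.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Shared, publicly known test inputs (e.g. disclosed through a smart contract).
X_all, y_all = make_classification(n_samples=1200, n_features=10, random_state=0)
X_test, y_test = X_all[1000:], y_all[1000:]

# --- Individual learning phase ---------------------------------------------
# Each participant trains a private model on its own data shard; neither the
# shard nor the fitted model is ever shared with the network.
n_participants = 5
shards = np.array_split(np.arange(1000), n_participants)
initial_preds = []
for idx in shards:
    model = LogisticRegression(max_iter=1000).fit(X_all[idx], y_all[idx])
    initial_preds.append(model.predict_proba(X_test)[:, 1])  # P(class = 1)
P = np.stack(initial_preds)  # shape: (participants, test points)

# --- Communication phase ----------------------------------------------------
# Gossip rounds on a ring: each participant repeatedly averages its current
# estimate with its two neighbors' estimates until the whole network agrees.
for round_ in range(100):
    left, right = np.roll(P, 1, axis=0), np.roll(P, -1, axis=0)
    P = (P + left + right) / 3.0
    if np.max(P.max(axis=0) - P.min(axis=0)) < 1e-3:  # agreement reached
        break

consensus_labels = (P[0] > 0.5).astype(int)
print(f"consensus after {round_ + 1} rounds, "
      f"accuracy = {(consensus_labels == y_test).mean():.3f}")
```

In an actual deployment, the update weights would reflect each participant’s assessment of its neighbors’ reliability and its confidence in its own predictions, and the exchanges would run over the network’s consensus protocol rather than inside a single process.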

How consensus learning sets itself apart

The idea behind CL is to efficiently combine knowledge (in the form of AI models) from multiple sources without sharing any sensitive or valuable information or intellectual property. This approach is designed to protect confidential information while remaining resilient against potential risks posed by malicious entities. CL builds on the highly successful ensemble learning paradigm, which provides powerful techniques for merging multiple models into a single one. Ensemble methods rely on the principle of the “wisdom of crowds”, leveraging the collective knowledge of a crowd to surpass that of any single member.
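
As a quick, back-of-the-envelope illustration of this principle (a textbook calculation, not a result from the paper): if three classifiers are statistically independent and each is correct 70% of the time, a simple majority vote over them is correct roughly 78% of the time.

```python
# Illustrative "wisdom of crowds" calculation; the 70% accuracy and the
# ensemble size of 3 are arbitrary assumptions, not figures from the paper.
from math import comb

p, n = 0.7, 3  # accuracy of each independent classifier, ensemble size
# The majority vote is correct whenever more than half the members are correct.
majority = sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(n // 2 + 1, n + 1))
print(f"single model: {p:.3f}  |  majority vote of {n}: {majority:.3f}")  # 0.700 vs 0.784
```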

Several blockchain implementations of AI services have emerged in recent years, showcasing innovative approaches to integrating AI with decentralized networks. For instance, Bittensor facilitates AI inference (model outputs) within its domain-specific subnets by weighting the predictions of its “miners” through a game-theoretic mechanism. FLock.io offers a platform for federated learning (a different type of distributed learning), albeit with a centralized aggregator, using the blockchain to validate model updates and reward participants. Another example is Ritual, which effectively operates a marketplace for ML models through its Infernet protocol, where requests to run a specific model are sent to the model owner.

CL sets itself apart through its distinct aggregation method, in which the predictions of the individual models pass through a secure gossip protocol in order to reach agreement. As such, CL leverages blockchain to create decentralized AI models, whereas existing implementations provide access to centralized ML through the blockchain. The focus is on enabling more accurate and secure AI through collaboration, while allowing entities that hold private, often sensitive data to join the system without compromising the confidentiality of that data.

In summary

Consensus learning presents a groundbreaking opportunity to implement machine learning directly on decentralized ledgers such as blockchains. With this initiative, we witness the emergence of a novel approach in which blockchain technology can fundamentally improve existing AI tools. This opens up exciting possibilities for innovation and secure collaboration in traditionally data-sensitive sectors, such as healthcare, setting the stage for the adoption of collaborative ML techniques. Moreover, the resilience of CL methods in the face of malicious actors fosters greater trust in AI systems, fortifying their reliability and integrity.