Improving Oracles with Peer Prediction/Consistency

The state of the art for oracles on Ethereum today almost exclusively follows some combination of the same 4-step formula outlined by Vitalik:

  1. Initial reporting
  2. Escalation
  3. Coin vote
  4. Fork

Kleros and Augur both follow this model almost exactly, while some other projects build oracles from a subset of these steps, for example having an honest-majority set of parties do the initial reporting and falling back to a coin vote if there's a dispute.

In this model, each step costs more to execute, but incentives are structured so that there's minimal "overflow" to the next step. For example, on Augur, during escalation, each side of the dispute has to put up larger and larger deposits, and the coin vote is triggered when the amount of total deposits exceeds some global threshold. When the coin vote resolves, individuals who bet on the incorrect outcome lose their deposits. These incentive structures reduce the number of disputes that ever have to go to a coin vote.
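As a simplified sketch of this escalation logic (the doubling schedule and threshold here are illustrative assumptions, not Augur's exact parameters):

```python
def dispute_rounds(initial_bond, coin_vote_threshold):
    """Count dispute rounds before a coin vote is triggered.

    Simplified sketch of Augur-style escalation: each round demands
    roughly double the previous bond, and once the cumulative bonds
    exceed the global threshold, the dispute escalates to a coin vote.
    """
    bond, total, rounds = initial_bond, 0, 0
    while total < coin_vote_threshold:
        total += bond   # this round's bond is now at stake
        bond *= 2       # the next round demands a larger deposit
        rounds += 1
    return rounds, total
```

Because the required bond grows geometrically, sustaining a dispute quickly becomes expensive for the dishonest side, which is exactly the "minimal overflow" property described above.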

This 4-step process is secured by the final step: if enough people disagree on an outcome, the system forks and anyone can choose which side they think is the truth. Naturally, this is (socially) expensive, since all underlying infrastructure like smart contracts and exchanges that rely on these outcomes will stop working temporarily until the migration process is done. This step is actually crucially important. In fact, if all other steps were omitted, this step alone is a sufficient condition for building an incentive-compatible (but expensive) oracle.

But is forking a necessary condition for oracles?

Part of the reason why forking is assumed to be necessary is that oracle data is subjective. Unlike in-protocol transactions that can be objectively verified to be valid according to a set of rules, subjective data isn’t possible to verify, and so we need to rely on crypto-economic design. Forking allows us to sidestep this problem because ultimately, forking means that the protocol doesn’t decide — the decision can be left to the rest of the world.

A simple crypto-economic principle (without forking) that makes oracles crypto-economically secure is to ensure that the cost of corrupting an oracle exceeds the benefit that can be gained by manipulating the output of the oracle. UMA implements this by having oracles put up a token deposit, and then tracking the amount of value that’s “at-risk”. Once the total value at risk is evaluated, automatic transactions are made that (synchronously) influence token price such that the token deposit exceeds the value at risk, before the oracle outcome is produced.
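The invariant this maintains can be sketched as a simple check. The function names, type signatures, and the safety margin below are illustrative assumptions, not UMA's actual API or parameters:

```python
def is_secure(staked_tokens: float, token_price: float,
              value_at_risk: float, margin: float = 1.2) -> bool:
    """Cost-of-corruption check: the market value of the oracle
    deposit must exceed the profit an attacker could extract by
    manipulating the oracle's output (with a safety margin)."""
    return staked_tokens * token_price > value_at_risk * margin

def required_deposit_tokens(value_at_risk: float, token_price: float,
                            margin: float = 1.2) -> float:
    """Tokens that must be staked to restore the invariant above,
    given the current token price."""
    return value_at_risk * margin / token_price
```

The key point is that this check has to hold before the oracle outcome is produced, which is why the system must track value at risk and adjust deposits (or token price) synchronously.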

There is a related and well-explored field in game theory, called peer prediction (or peer consistency), that also does a very good job of incentivising honest reporting. The main underlying idea is that every individual submitting a report has access to their own observed truth, an additional data point that no one else has access to. By designing a payout scheme that rewards reports that contribute such data, we can incentivise individuals to report honestly.

One such construction of a payout scheme is the Bayesian Truth Serum mechanism, which rewards participants whose reports turn out to be "surprisingly common", i.e. more common than the crowd predicted.

To give an example, let's say Red Bull wanted to know the distribution of the number of cans of energy drinks consumed yearly per person, and they decide to ask 100 randomly selected people to report how many cans of energy drinks they drink (e.g. 0–100, 100–200, 200+). You have been chosen as one of these lucky participants in this yearly survey, and of course, there's a survey reward.

However, you aren't paid for just reporting the number of energy drinks you've had in the past year. You also need to report what you think the distribution of everyone's answers is going to look like, and you are rewarded more if your own answer turns out to be significantly more common than the aggregate of everyone's predictions suggested.

For example, if everyone in the survey expects only 5% of people to drink 200+ energy drinks a year, and it turns out that more than 5 people in the survey, including you, reported that they drank 200+ energy drinks a year, you get rewarded more. This works by exploiting the idea that your own observation is a "sample of one": because your answer raises your estimate of its own popularity relative to the estimates of those who answered differently, it stands to reason that others will underestimate the popularity of your choice. And so, a truthful report is also the report that has the best chance of being surprisingly common.
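A minimal sketch of the BTS scoring rule makes this concrete. This follows Prelec's formulation (information score plus a prediction score); the smoothing constant `eps` is an illustrative detail to avoid taking log of zero:

```python
import math

def bts_scores(answers, predictions, alpha=1.0):
    """Bayesian Truth Serum scores.

    answers:     list of chosen option indices, one per respondent
    predictions: list of probability vectors, each respondent's forecast
                 of how popular every option will be
    alpha:       weight on the prediction-accuracy term
    """
    n = len(answers)
    k = len(predictions[0])
    eps = 1e-9  # smoothing so log(0) never occurs

    # x_bar[j]: observed fraction of respondents who chose option j
    x_bar = [max(sum(1 for a in answers if a == j) / n, eps)
             for j in range(k)]
    # y_bar[j]: geometric mean of predicted frequencies for option j
    y_bar = [math.exp(sum(math.log(max(p[j], eps)) for p in predictions) / n)
             for j in range(k)]

    scores = []
    for a, p in zip(answers, predictions):
        # information score: positive when your answer is more common
        # than the (geometric-mean) crowd prediction, i.e. surprisingly common
        info = math.log(x_bar[a] / y_bar[a])
        # prediction score: rewards accurate forecasts of the population
        pred = sum(x_bar[j] * math.log(max(p[j], eps) / x_bar[j])
                   for j in range(k))
        scores.append(info + alpha * pred)
    return scores
```

In the energy-drink example: if everyone predicts only 5% of answers in the 200+ bucket but 20% of respondents actually report it, the information score for those respondents is log(0.20/0.05) > 0, so the surprisingly common answer earns the higher payout.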

Unfortunately, it may not always be reasonable to ask individuals to report on the expected distributions of responses of their peers. In the literature, this is referred to as a non-minimal mechanism, in the sense that participants are additionally asked to forecast their peers’ answers.

Improved constructions are being developed for eliciting honest reporting, and recent publications like Infochain even add economic guarantees that are resistant to external bribes (parameterised by the number of peers) and that operate within the constraints of an Ethereum smart contract. These constructions are likely to become more popular as their guarantees become more formalised and as their claims get experimentally verified on testnets, but even now, they provide clear improvements to the initial reporting step of the 4-step oracle process.

Join the Torus Community

We will be releasing more developer updates on our suite of products. Join our official Telegram group or follow us on Twitter for the latest news!