As detailed in our prior post, a fair marketplace for expertise must align the payout to a true expert with the material gains from purchasing their expertise. But how do you know that an entity introduced as an expert is (a) truly an expert, and (b) sharing their expertise for your benefit rather than to manipulate or misinform?
Amulet token enabled marketplaces apply the ideas underlying Nakamoto consensus to transactions of expertise. Experts must post a stake of their own wealth as a prerequisite for responding to a request for a prediction (RFP). This stake is pooled with the stakes posted by the other responding experts and with the maximal bid placed by an investor who stands to profit from the expert responses. Once the responses are verified for accuracy, the best-performing respondents are rewarded for their expertise via a payout from this pool. If the accuracy of a response cannot be verified, the expert who submitted it receives no payout and forfeits their posted stake. Amulet token enabled marketplaces therefore create a trusted environment without authoritative or a priori trusted participants, through the same economic incentives that public, permissionless blockchains use to create trust.
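To make the mechanics concrete, here is a minimal sketch of the settlement step in Python. The names (`Response`, `settle_rfp`) are hypothetical, and the equal split among correct respondents is a placeholder; the actual reward payout function is defined in the Amulet white paper.

```python
from dataclasses import dataclass

@dataclass
class Response:
    expert_id: str
    stake: float      # posted as a prerequisite for responding
    prediction: str

def settle_rfp(responses: list[Response], investor_bid: float,
               verified_answer: str) -> dict[str, float]:
    """Pool all posted stakes with the investor's maximal bid, then
    pay out to the respondents whose predictions were verified correct.
    Unverified respondents receive nothing and forfeit their stakes."""
    pool = investor_bid + sum(r.stake for r in responses)
    winners = [r for r in responses if r.prediction == verified_answer]
    payouts = {r.expert_id: 0.0 for r in responses}
    for r in winners:
        payouts[r.expert_id] = pool / len(winners)  # placeholder equal split
    return payouts
```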
How could these marketplaces fail?
What are the most vulnerable attack surfaces?
Diluting The Value Of The Marketplace For Investors
One particularly damaging attack vector would be flooding the marketplace with worthless responses to requests for predictions, diluting their value for investors. Recall that the marketplace requires experts to post stakes before they can issue a response, and that incorrect responses receive no payout. A flood of deliberately incorrect responses would therefore bankrupt the attacker. But what about a flood of random guesses?
A random guess on a multiple-choice question with n possible responses has, by definition, a 1/n chance of being correct. Could a flood of random guesses preserve the attacker's budget while devaluing the marketplace for investors?
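The per-guess arithmetic makes the answer apparent. Suppose each response requires a stake s and a correct answer earns a payout p (symbols introduced here purely for illustration). A random guesser wins p with probability 1/n and spends s on every response, so their expected budget change per guess is p/n − s, which is strictly negative whenever p < n·s:

```python
def expected_budget_change(stake: float, payout: float, n: int) -> float:
    # A random guess earns the payout with probability 1/n;
    # the stake is spent on every response regardless.
    return payout / n - stake

# Example: 4-way question, stake of 1.0, payout of 2.5 for a correct answer.
print(expected_budget_change(stake=1.0, payout=2.5, n=4))  # -0.375 per guess
```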
This depends on the implementation of the reward payout function. The Amulet team designed and implemented a reward payout function under which the budget of a randomly guessing attacker forms a strict super-martingale: a stochastic process whose expected value strictly decreases over time. A flood of random guesses therefore guarantees a shrinking budget for the attacker. We have simulated a series of 100 requests for expertise on the Amulet token enabled marketplace. Each of these 100 requests is answered by three types of experts: one that guesses randomly, one that is very unlikely to respond correctly, and one that is very likely to respond correctly. Experts within each type keep their response strategy fixed across the 100 consecutive requests. After the reward payout for each request, the budgets of all experts within each type are averaged; this within-type average is the standard estimator of the expected budget. The results of the simulation clearly show that a randomly guessing expert is bankrupted just as quickly as an expert who is very unlikely to respond correctly.
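The sketch below condenses that simulation under simplifying assumptions: a fixed stake, a fixed investor bid, and a placeholder payout that splits the pool equally among correct respondents. It will not reproduce the exact trajectories of the open-sourced simulation, which uses the actual Amulet payout function, but it shows the averaging setup and the downward drift of the guessers' budgets.

```python
import random

N_REQUESTS, EXPERTS_PER_TYPE, N_CHOICES = 100, 50, 4
STAKE, INVESTOR_BID, STARTING_BUDGET = 1.0, 10.0, 100.0

# Probability of responding correctly, per expert type.
TYPES = {"random_guess": 1 / N_CHOICES, "low_skill": 0.05, "high_skill": 0.95}

budgets = {t: [STARTING_BUDGET] * EXPERTS_PER_TYPE for t in TYPES}

for _ in range(N_REQUESTS):
    # Every expert posts a stake, then responds correctly or not.
    outcomes = {t: [random.random() < p for _ in range(EXPERTS_PER_TYPE)]
                for t, p in TYPES.items()}
    n_correct = sum(sum(flags) for flags in outcomes.values())
    pool = INVESTOR_BID + STAKE * EXPERTS_PER_TYPE * len(TYPES)
    # Placeholder payout: split the pool equally among correct respondents.
    share = pool / n_correct if n_correct else 0.0
    for t in TYPES:
        for i in range(EXPERTS_PER_TYPE):
            budgets[t][i] += (share if outcomes[t][i] else 0.0) - STAKE

for t, b in budgets.items():
    print(t, sum(b) / EXPERTS_PER_TYPE)  # within-type average budget
```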
The implementation of the simulation is open-sourced under the Apache License v2 on the official Amulet GitHub. The details of the Amulet reward payout function are given in chapter 2, section 4 of the Amulet white paper, along with a rigorous proof that the super-martingale property holds for the function. This guarantees that an attempt to devalue an Amulet token enabled marketplace for expertise by random guessing will end in bankruptcy.
Diluting The Value Of The Marketplace For Experts
Another particularly damaging attack vector would be colluding with other experts to reduce the reward payout for non-colluding experts. Sybil attacks are a special case, since Sybil identities can be viewed as perfectly compliant partners in collusion. There are three possible variants of this attack.
Clearly, no colluding parties would uniformly respond incorrectly to RFPs despite possessing the relevant expertise. Doing so would forfeit their posted stakes, which in turn would only increase the reward payout for the correctly responding experts. So what prevents all colluding parties from uniformly responding correctly to RFPs?
Amulet token enabled marketplaces all employ a relative ranking of experts: an expert's percentile rank is computed from the expected utility of acting on their predictions to RFPs. As a consequence, increasing the ratio of correct responses to a sequence of RFPs reduces the rank of the colluding parties, because more correct responders leave fewer poorly performing responders to rank above. Think of a classroom with a fixed number of students: if some of them band together to share correct answers, a smaller percentage of the classroom is wrong, so the relative rank of each colluder decreases.
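The classroom intuition is easy to check numerically. The snippet below uses a hypothetical `percentile_rank` based on raw correctness scores, standing in for the expected-utility-based rank defined in the white paper:

```python
def percentile_rank(own_score: int, all_scores: list[int]) -> float:
    """Fraction of respondents this expert strictly outperforms.
    A stand-in for Amulet's expected-utility-based relative rank."""
    return sum(s < own_score for s in all_scores) / len(all_scores)

# 20 respondents; a correct response scores 1, an incorrect one 0.
few_correct  = [1] * 5  + [0] * 15   # only 5 correct respondents
many_correct = [1] * 15 + [0] * 5    # colluders push 15 correct

print(percentile_rank(1, few_correct))   # 0.75: a correct expert beats 75%
print(percentile_rank(1, many_correct))  # 0.25: more correct peers, lower rank
```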
The third variant is for the colluders to decide among themselves who will respond with the correct prediction and who will deliberately respond in error. As detailed in chapter 2, section 5 of our white paper, the reward payout function scales the payout to any correct responder by their prior rate of correct predictions. A colluding party must therefore keep a fixed identity as its designated correct responder to maximize its payout; rotating correct responders at random yields a minimal payout every time.
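To illustrate why rotation backfires, the sketch below assumes a payout that scales linearly with the designated responder's prior success rate (the linear form is our simplification; the exact scaling is given in the white paper). A fixed designee quickly builds a success rate near 1, while rotating among five identities pins every identity's rate near 1/5:

```python
import random

def syndicate_payout(colluders: int, rounds: int, rotate: bool,
                     base_payout: float = 10.0) -> float:
    """Cumulative payout when a syndicate designates one correct
    responder per round and the rest respond incorrectly on purpose.
    The payout is scaled by the designee's prior success rate."""
    correct = [0] * colluders
    answered = [0] * colluders
    total = 0.0
    for _ in range(rounds):
        chosen = random.randrange(colluders) if rotate else 0
        prior = correct[chosen] / answered[chosen] if answered[chosen] else 0.0
        total += base_payout * prior
        for i in range(colluders):
            answered[i] += 1
        correct[chosen] += 1
    return total

random.seed(0)
print(syndicate_payout(5, 100, rotate=False))  # fixed designee: ~990
print(syndicate_payout(5, 100, rotate=True))   # rotating: roughly a fifth
```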
But maintaining a consistent correct responder is indistinguishable from the behavior of a true, highly skilled expert, and therefore cannot devalue the marketplace for investors. We concede, though, that it can give rise to syndicates of experts (e.g., professional traders organizing into companies).