Blockchain Consensus Algorithms for Dummies


Distributed computing has turned things that were once unthinkable into reality: pretty much everything we now label "decentralized". While the possible applications of this technology are virtually limitless, it still has its flaws, limitations and peculiarities.
Some of them, like a single point of failure (SPOF), are easily addressable; others are not. One of the hardest yet most important problems developers face when designing a distributed system is the consensus problem: how do you achieve reliability in a network of multiple unreliable nodes? The answer is neither easy nor unambiguous, and that is why consensus algorithms exist.


In the world of cryptocurrencies, consensus algorithms exist to prevent double spending by guaranteeing that agreement on a particular data value is reached among mutually untrusting, uncoordinated parties. They also ensure that individuals or groups with significant computing power cannot derail the chain, protecting it from constant forking. Yet despite being so important to the success of any cryptocurrency, consensus algorithms are often overlooked.
So let's dig a little deeper into how consensus algorithms work and why there are so many of them.

There's no such thing as "the best consensus algorithm", at least not without a particular context, and even then it's hard to say with any certainty. Once again: there is no ideal consensus algorithm, but each one has its strengths and, consequently, its weaknesses.

The first consensus algorithm ever used in a blockchain was introduced to the world by Satoshi Nakamoto. It's called Proof-of-Work, or simply mining, and being the first to arrive, it's also the most widespread, at least for the time being. It's implemented in most existing cryptocurrencies, including Bitcoin, Ethereum (for now, that is), Litecoin and Dogecoin. It's this algorithm we have to thank for a massive rise in power consumption throughout the world. Miners solve hard, otherwise useless puzzles in order to create blocks. In PoW the longest chain wins and the others are discarded, so, assuming the majority of miners work on the same chain, it will grow faster, stay the longest and therefore be the most trustworthy. The approach has proven itself over time, but it's time we started treating it as legacy. It's safe as long as at least 51% of the mining work is honest, but even that doesn't justify implementing it in new projects, not when the ecological impact is this great and not when there are so many alternatives.
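
To make that concrete, here's a minimal Python sketch of the puzzle miners solve: keep trying nonces until the block's hash starts with enough zeros. The function and difficulty value are illustrative only; real Bitcoin mining uses double SHA-256 over a binary block header and a far harder target.

```python
import hashlib

def mine(block_data: str, difficulty: int = 4) -> tuple[int, str]:
    """Find a nonce so that sha256(block_data + nonce) starts with
    `difficulty` hex zeros. A toy version of the PoW puzzle."""
    prefix = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(prefix):
            return nonce, digest  # cheap to verify, expensive to produce
        nonce += 1

nonce, digest = mine("prev_hash|transactions")
print(nonce, digest)
```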

The second most widespread consensus algorithm in blockchains is Proof-of-Stake. Decred and Peercoin use it, and so will Ethereum soon enough. In PoS, blocks aren't created by miners but voted on by minters who "bet" their stakes. Should a fork happen, validators spend their tokens to vote on the correct chain, and those who voted on the wrong (least supported) fork would "forfeit their stake" on the correct one. The most common argument against PoS is the so-called nothing-at-stake problem.

Since supporting a fork costs almost no computational power, validators could simply vote for both sides of every fork that happens, eliminating any risk for themselves.
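
To illustrate the core PoS idea with a toy example (the validator names and stakes are made up, and real protocols add randomness beacons, slashing and checkpoints on top of this), the chance of being picked to create the next block is proportional to the stake a validator has locked up:

```python
import random

# Hypothetical stakes, in tokens, locked up by each validator.
stakes = {"alice": 500, "bob": 300, "carol": 200}

def pick_proposer(stakes: dict[str, int], seed: int) -> str:
    """Choose the next block proposer with probability proportional to stake."""
    rng = random.Random(seed)  # in real chains the seed comes from the protocol
    validators = list(stakes)
    weights = [stakes[v] for v in validators]
    return rng.choices(validators, weights=weights, k=1)[0]

# alice holds 50% of the total stake, so she proposes roughly half the blocks.
print([pick_proposer(stakes, slot) for slot in range(10)])
```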

To say a system possesses Byzantine Fault Tolerance (BFT) means it can keep functioning properly in the face of the classic failure scenario in distributed computing, often illustrated with Byzantine generals who have to decide in unison whether or not to attack, and who face dire consequences if they don't. BFT is very technical and quite complicated; a solid explanation can be found here, but, in general, BFT consensus algorithms allow validators (the generals) to manage the state of the chain and exchange messages with one another to coordinate their work and keep each other honest.
Several consensus protocols based on BFT exist, the main ones being Practical Byzantine Fault Tolerance (PBFT) and Federated Byzantine Agreement (FBA).
The first one has fewer than 20 pre-selected generals and runs incredibly efficiently. The transaction throughput is high, but the model is centralized. It's currently used by Hyperledger.
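
As a rough illustration of the voting rule at the heart of these protocols (nothing like the full three-phase PBFT message flow), a value is only accepted once at least 2f + 1 of the 3f + 1 generals agree on it, which tolerates up to f traitors:

```python
from collections import Counter

def bft_decide(votes: dict[str, str], f: int) -> str | None:
    """Accept a value only if at least 2f + 1 validators agree on it.
    `votes` maps validator name -> the block hash it voted for."""
    quorum = 2 * f + 1
    value, count = Counter(votes.values()).most_common(1)[0]
    return value if count >= quorum else None

# 4 validators tolerate f = 1 traitor; 3 matching honest votes are enough.
votes = {"g1": "blockA", "g2": "blockA", "g3": "blockA", "g4": "blockB"}
print(bft_decide(votes, f=1))  # -> "blockA"
```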

FBA is used by Stellar and Ripple. The main idea is that every Byzantine general is responsible for their own chain and sorts messages as they come in to decide what's true. In Ripple the generals are pre-selected; in Stellar anyone can be a validator, so you choose whom to trust.
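
And a heavily simplified sketch of the federated flavour, with made-up node names: instead of polling the whole network, each node picks its own set of trusted peers and accepts a value once a supermajority of that set agrees (Stellar formalizes this with "quorum slices"; this toy version only captures the "choose whom you trust" intuition):

```python
def node_accepts(node: str, trusted: dict[str, set[str]],
                 votes: dict[str, str], threshold: float = 0.67) -> str | None:
    """A node accepts the value that a supermajority of *its own*
    trusted peers voted for."""
    peers = trusted[node]
    peer_votes = [votes[p] for p in peers if p in votes]
    for value in set(peer_votes):
        if peer_votes.count(value) / len(peers) >= threshold:
            return value
    return None  # no agreement among this node's trusted peers yet

trusted = {"me": {"node_a", "node_b", "node_c"}}
votes = {"node_a": "ledgerX", "node_b": "ledgerX", "node_c": "ledgerX"}
print(node_accepts("me", trusted, votes))  # -> "ledgerX"
```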

BFT-based systems have small transaction fees and scale easily, but they introduce a degree of centralization.

EOS and Steemit implement DPoS, Delegated Proof-of-Stake, which, despite the name, differs greatly from PoS. In it, token holders vote for delegates to do the validation on their behalf, rather than voting on blocks themselves. This makes DPoS centralized to a large degree, though it remains decentralized in one sense: everyone in the network takes part in selecting the nodes that validate transactions. Because all decisions are carried out by a small group, transactions get much faster, transaction costs and block times drop (EOS aims for sub-second blocks; compare that to 10 minutes in Bitcoin!), and scaling becomes easy.
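
Here's a toy sketch of the delegate election DPoS rests on, with made-up holders and numbers: votes are weighted by stake and only the top few delegates get to produce blocks, typically in round-robin order:

```python
from collections import defaultdict

# Hypothetical token holders: (stake, delegate they vote for).
holders = [(400, "delegate_a"), (250, "delegate_b"),
           (150, "delegate_a"), (100, "delegate_c")]

def elect_delegates(holders: list[tuple[int, str]], n: int = 2) -> list[str]:
    """Tally stake-weighted votes and return the top-n block producers."""
    tally: dict[str, int] = defaultdict(int)
    for stake, delegate in holders:
        tally[delegate] += stake
    ranked = sorted(tally, key=tally.get, reverse=True)
    return ranked[:n]

# The elected delegates then take turns producing blocks.
print(elect_delegates(holders))  # -> ['delegate_a', 'delegate_b']
```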

The last widely used approach is the DAG, or Directed Acyclic Graph. DAGs don't use the blockchain data structure and handle transactions mostly asynchronously, which in theory allows for an unbounded number of transactions per second. In practice, though, a DAG's properties depend heavily on the specific implementation. Tangle (used by IOTA), Hashgraph, and Block-Lattice (used by Nano) are the three implementations that have received the most attention recently, though not all of it has been positive.
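
To show how this differs from a chain, here's a toy Python sketch of a Tangle-like structure (the random parent selection is a stand-in for IOTA's real tip-selection algorithm): every new transaction approves two earlier ones, so many transactions can be attached in parallel rather than waiting for the next block:

```python
import random

class Tx:
    def __init__(self, name: str, approves: list["Tx"]):
        self.name = name
        self.approves = approves  # edges pointing to earlier transactions

genesis = Tx("genesis", [])
tangle = [genesis]

def add_transaction(name: str) -> Tx:
    """Attach a new transaction that approves two earlier ones.
    Picking them at random is a toy stand-in for real tip selection."""
    parents = random.sample(tangle, k=min(2, len(tangle)))
    tx = Tx(name, parents)
    tangle.append(tx)
    return tx

for i in range(5):
    add_transaction(f"tx{i}")

for tx in tangle:
    print(tx.name, "->", [parent.name for parent in tx.approves])
```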

For a consensus algorithm to be fit for use in a public network it must be resistant to attacks, even if many newer algorithms offer higher raw throughput than, say, PoW. This is a developing field, and more advanced and effective technologies are likely to emerge in the coming years. Some consensus algorithms are barely tested, while others are only theories that may never see an actual implementation. Some of the less widespread algorithms include Proof-of-Authority, Proof-of-Weight (Proof-of-Spacetime, Proof-of-Reputation and so on; Proof-of-Anything, really), Proof-of-Activity (PoW + PoS), Proof-of-Burn, Proof-of-Capacity (also known as Proof-of-Storage or Proof-of-Space), and Intel's Proof-of-Elapsed-Time.

Currently, we are making tradeoffs between scalability and the degree to which these systems are centralized. With the introduction of second-layer networks, the scalability part of the equation may fade into the background, so it will certainly be interesting to watch how this field changes and what technologies emerge in the coming years.
