The Internet Before Decentralization
The internet is a human construction with its own languages, and these languages have rules, known as protocols, that allow it to function properly. Before these protocols were developed, computers were isolated machines with no way to communicate with one another. By building a structure of interconnections between computers and agreeing on shared communication protocols, machines became able to interact with each other. This interconnected structure is called system architecture, and it is what makes the internet possible. There are a number of different types of architecture, but the two most prevalent are client-server and peer-to-peer networks.

Of these two, the client-server model dominates the landscape and communicates using a protocol called Hypertext Transfer Protocol (HTTP). Data is stored on centralized servers and retrieved over HTTP using location-based addresses (a minimal example appears below). This model is very effective for the tasks that once made up the majority of internet traffic: loading websites and handling text and image files. It also performs well on speed, latency, and throughput, which is why centralization has proven so useful and why HTTP came to dominate the landscape.

However, HTTP is not perfect. It is poorly suited to transferring large data files, such as audio and video, which is why peer-to-peer networks gained popularity. There is also the issue of server security. Consolidating data under a central organization makes the risk of data breaches and hacks enormous: the data of an entire population is stored on a handful of servers under a single authority's control. If bad actors gain access to those servers, they can read, manipulate, and delete huge swaths of information.
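To make location-based addressing concrete, here is a minimal sketch, using only Python's standard library, of the client-server exchange described above. The URL and host are illustrative placeholders, not details from the text; any address would do.

```python
# A minimal sketch of the client-server model: a client asks one
# specific server, named by a location-based address (a URL), for a
# resource. "example.com" is an illustrative host, not a real service
# discussed in the text.
from urllib.request import urlopen

# The URL encodes *where* the data lives (scheme, host, path), not
# *what* the data is. If this one server is unreachable or compromised,
# so is the resource, even if identical copies exist elsewhere.
url = "http://example.com/index.html"

with urlopen(url) as response:       # HTTP GET sent to the central server
    status = response.status         # e.g. 200 OK
    body = response.read()           # the server's copy of the page

print(status, len(body), "bytes from", url)
```

The point of the sketch is that the address names a location rather than the content itself: every client requesting this page must go through the one server that hosts it, which is precisely the single point of failure and control described above.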