RE: Taking the "Center" From "Data Center"

in programming •  8 years ago 

I don't see anything here about encrypting the stored data. IMHO the hosts should never know what it is they are storing, and no host should hold an entire set of packets that comprise a package.

Data storage should also be redundant. If one host goes down that should not prevent a distributed site from working.

So when a client requests a web page, the client puts out a request for data based on a GUID. The various hosts send whatever packets they hold, just as in a torrent system, and the client assembles and then decrypts the package. Then, if more processing power is needed, the client can build packets of computation to send out to processing hosts, each of which will only hold a subset of the problem being computed - perhaps using some sort of zero-knowledge-proof-like system, so that no host actually knows what is being done.
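To make the retrieval idea concrete, here is a minimal sketch of the fetch-and-assemble step. The in-memory host dictionaries, the `(guid, index)` fragment keys, and the fragment contents are all my own illustrative assumptions, not part of any actual protocol:

```python
# Hypothetical in-memory "hosts", each holding only a few fragments of a
# package, keyed by (guid, fragment_index). No single host has the full set.
hosts = [
    {("site-guid", 0): b"Hello, "},
    {("site-guid", 1): b"distributed "},
    {("site-guid", 0): b"Hello, ", ("site-guid", 2): b"web!"},
]

def fetch_package(guid: str, total_fragments: int) -> bytes:
    """Ask every host for whatever fragments it holds, then reassemble."""
    collected = {}
    for host in hosts:
        for (g, index), data in host.items():
            if g == guid and index not in collected:
                collected[index] = data
    if len(collected) != total_fragments:
        raise LookupError(f"only {len(collected)}/{total_fragments} fragments found")
    return b"".join(collected[i] for i in range(total_fragments))

page = fetch_package("site-guid", 3)
assert page == b"Hello, distributed web!"
```

Decryption would happen after reassembly, on the client side, so hosts never see the plaintext.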

When I, as a site builder, set up a site, I post the GUID to a blockchain with very little metadata. I then toss the site's data out to the winds, where various hosts each store a few of the packets. I shouldn't even know who or where the packets are being stored.
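The publishing side might look something like this sketch. The stand-in "chain" list, the host pool, the fragment size, and the metadata fields are all assumptions made up for the example:

```python
import random
import uuid

FRAGMENT_SIZE = 8
chain = []                        # stand-in for a blockchain ledger
hosts = [dict() for _ in range(5)]

def publish_site(data: bytes) -> str:
    """Split site data into fragments, scatter them, record only a GUID."""
    guid = str(uuid.uuid4())
    fragments = [data[i:i + FRAGMENT_SIZE]
                 for i in range(0, len(data), FRAGMENT_SIZE)]
    for index, fragment in enumerate(fragments):
        # The builder picks hosts at random and deliberately keeps no record
        # of which host received which fragment.
        random.choice(hosts)[(guid, index)] = fragment
    # Only the GUID and minimal metadata go on chain.
    chain.append({"guid": guid, "fragments": len(fragments)})
    return guid

guid = publish_site(b"static site content")
```

Because the builder records nothing beyond the on-chain GUID, even the builder can't say where the fragments ended up.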

Thank you for reading and taking the time to respond!

Client-side symmetric encryption was briefly mentioned as the recommended default. However, encryption is not required to use the system, and in some instances may not be desired, such as for a publicly-available web page stored in the system. The encryption type is an option that users of clients and hosts can set. Additionally, a client could be set to refuse to deal with hosts who accept data which hasn't been encrypted, and vice-versa.
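That refusal policy could be expressed as a simple compatibility check before a client and host agree to transact. The policy fields and the matching rule below are my own assumptions, sketched to show the idea rather than any defined protocol:

```python
from dataclasses import dataclass

@dataclass
class HostPolicy:
    accepts_plaintext: bool       # host will store unencrypted fragments

@dataclass
class ClientPolicy:
    encrypts: bool                # client encrypts before upload
    strict: bool                  # refuse hosts that accept plaintext at all

def compatible(client: ClientPolicy, host: HostPolicy) -> bool:
    """Decide whether this client is willing to deal with this host."""
    if client.strict and host.accepts_plaintext:
        return False              # client shuns hosts that mix in plaintext
    if not client.encrypts and not host.accepts_plaintext:
        return False              # host shuns clients sending plaintext
    return True

# A strict client avoids hosts that accept unencrypted data:
assert not compatible(ClientPolicy(encrypts=True, strict=True),
                      HostPolicy(accepts_plaintext=True))
```

The same mechanism works in both directions, so hosts can be just as selective about the clients they serve.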

On redundancy and anonymity: I often take things for granted even though I shouldn't. Redundancy and anonymity are two of the reasons segments and scripts are split into smaller blocks. Many features, such as verification of computations, also depend on more than one host performing identical operations on identical data. Any individual or group willing to invest the time and resources could discover where packets are stored, because of how the internet works, but the average client wouldn't know who or where data is being stored or processed.
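Verification by redundant computation can be sketched as dispatching the same work to several hosts and accepting only a majority result. The hosts here, including the deliberately faulty one, are contrived for the example, assuming a simple majority-vote rule:

```python
from collections import Counter

def honest_host(data: bytes) -> int:
    return sum(data)              # the "real" computation

def faulty_host(data: bytes) -> int:
    return sum(data) + 1          # a wrong or malicious result

def verified_compute(data: bytes, hosts) -> int:
    """Send identical work to every host; accept only a majority answer."""
    results = Counter(host(data) for host in hosts)
    answer, votes = results.most_common(1)[0]
    if votes <= len(hosts) // 2:
        raise RuntimeError("no majority agreement among hosts")
    return answer

result = verified_compute(b"abc", [honest_host, honest_host, faulty_host])
```

Since one faulty host is outvoted by two honest ones, the client still recovers the correct answer without trusting any single host.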

Fragments are meant to be small in size, from 8 bytes to megabytes, with the range set by clients and hosts. An embedded client or host may handle only 256 bytes at a time, while a desktop-based system might deal only in megabytes. This also opens the door for aggregator services to spring up which accept hundreds or thousands of minuscule transaction requests from embedded/IoT devices, assemble them into large fragments megabytes in size, and send them off to hosts for storage.
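An aggregator of that kind reduces to a buffer that flushes once enough tiny submissions accumulate. The threshold, the flush callback, and the 8-byte sensor readings below are assumptions chosen to fit the fragment-size range described above:

```python
class Aggregator:
    """Buffer tiny fragments from many devices; flush one large fragment."""

    def __init__(self, flush, threshold=1024 * 1024):
        self.flush = flush            # called with each assembled fragment
        self.threshold = threshold    # flush once this many bytes accumulate
        self.buffer = bytearray()

    def submit(self, tiny_fragment: bytes) -> None:
        self.buffer.extend(tiny_fragment)
        if len(self.buffer) >= self.threshold:
            self.flush(bytes(self.buffer))
            self.buffer.clear()

stored = []                           # stand-in for "send to a host"
agg = Aggregator(stored.append, threshold=64)   # tiny threshold for the demo
for _ in range(20):
    agg.submit(b"sensor=1")           # 8-byte readings, the minimum size
```

With a real 1 MiB threshold, thousands of IoT-scale writes would collapse into a single host transaction.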

This system isn't meant to hide information from prying eyes, but to provide a lightweight blockchain-backed distributed processing and storage mechanism; essentially to prevent censorship but also useful for many other tasks.

If anything else sparks your interest, or if you have suggestions or general comments, please feel free to respond!