Daily techtalk... Centralized or decentralized: the future of computing architecture


Today there are many questions around this topic: computing system architecture must step away from designs based on the nearly sixty-year-old von Neumann concept. There is broad consensus that future architectures will be memory driven, not processor driven. The only question is which implementation method will be more successful. In this article I try to collect the two main concepts that I currently see as the most promising.


Method 1: Memory-centric architecture

This week I had the opportunity to attend the HPE Discover event, where I saw the concept of The Machine. This concept centralizes memory into a fault-tolerant, highly available layer that is accessible to every processing unit, no matter which technology we want to use (GPU, CPU, FPGA, etc.). Of course, each computing unit has local memory, but all data is accessed directly from the collective persistent memory. You can find the presentation below:

Development is very much an ongoing process. Last year they presented a software-simulated model; now they have working hardware built from currently available parts, where every component can be swapped out once the target technology becomes generally available.
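To make the idea more concrete, here is a minimal sketch in Python (my own illustration with assumed names, not HPE code) of what "every unit works against one shared memory pool" means: heterogeneous compute units keep only scratch memory locally and mutate a single shared copy of the data in place.

```python
# Minimal sketch of the memory-centric idea: heterogeneous compute units
# share one memory pool and operate on data in place, never owning a copy.

class SharedMemoryPool:
    """Stands in for the fault-tolerant, fabric-attached memory layer."""
    def __init__(self, size):
        self.cells = [0] * size          # every unit sees the same cells

    def load(self, addr):
        return self.cells[addr]

    def store(self, addr, value):
        self.cells[addr] = value

class ComputeUnit:
    """Any processing technology (CPU, GPU, FPGA) attached to the pool."""
    def __init__(self, name, pool):
        self.name, self.pool = name, pool
        self.scratch = {}                # small local memory, caching only

    def increment(self, addr):
        # Work happens against the shared pool, not a private copy.
        self.pool.store(addr, self.pool.load(addr) + 1)

pool = SharedMemoryPool(size=16)
units = [ComputeUnit(n, pool) for n in ("CPU", "GPU", "FPGA")]
for u in units:
    u.increment(0)
print(pool.load(0))  # 3: all three units mutated the single shared copy
```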

Method 2: Memory-decentralized architecture

The other method is the IBM way, which the company started in early 2011 with the brain-simulating SyNAPSE architecture. They also built a software simulation of their concept first, and later created a processor named TrueNorth. These processors can be combined to build clusters. The main advantages of the architecture are low power consumption and high processing speed. As the whole world moves toward machine learning and machine intelligence, I think we need to optimize most of our systems for these areas.

With this concept we are able to run billions of threads in parallel. This is extremely massive parallel computing. What I really appreciate in this concept is its similarity to the human brain. A leading researcher from IBM recently published a paper about using this architecture for deep learning.

Link to the publication
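To illustrate the kind of computation this architecture is built for, here is a minimal sketch in Python (my own simplification; TrueNorth's real cores, routing, and scale are far more involved) of many simple neurons with local state exchanging spikes in discrete time steps:

```python
# Minimal sketch of the neuromorphic idea: many simple neurons, each with
# local state and weighted fan-out, stepped in parallel and driven by spikes.

import random

class Neuron:
    def __init__(self, threshold=1.0, leak=0.9):
        self.potential = 0.0
        self.threshold = threshold
        self.leak = leak                  # potential decays each time step
        self.targets = []                 # (neuron, weight) fan-out list

    def receive(self, weight):
        self.potential += weight

    def step(self):
        """Apply leak, then fire and reset if the threshold is crossed."""
        self.potential *= self.leak
        if self.potential >= self.threshold:
            self.potential = 0.0
            return True
        return False

# Wire a small random network; a real chip holds a million such neurons.
neurons = [Neuron() for _ in range(64)]
for n in neurons:
    n.targets = [(random.choice(neurons), random.uniform(0.1, 0.5))
                 for _ in range(4)]

neurons[0].receive(2.0)                   # inject an input spike
for tick in range(10):
    fired = [n for n in neurons if n.step()]
    for n in fired:                       # deliver spikes to the fan-out
        for target, weight in n.targets:
            target.receive(weight)
    print(f"tick {tick}: {len(fired)} spikes")
```

Every neuron updates independently in each tick, which is exactly what makes the design so easy to parallelize across cores and chips.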

My vision regarding these architectures

My main concern with the HP concept is the performance of the link to the memory. They also presented an "edge computing" concept, where everything should be processed at the edge before the data is stored somewhere else. If we combine the two concepts, processing has to be placed close to the memory, but that goes against the basic idea of the centralized design. The main problems will always be parallelism and the bandwidth of the interconnects.

I am more comfortable with the IBM method, but it needs to evolve on several levels to become useful for future needs. Currently it is mainly used by DARPA, and recently there have been some publications regarding Samsung's use of it.

The only problem with this design is the increase in the number of layers between synapses, but this is similar to the human brain.

In my vision the solution might lie in the interconnect layer. We need to connect all synapse units, each of which has memory and a processing unit, to a giant crossbar, where connected units communicate on the same color (wavelength) of light. With this possible solution we could connect a large number of units over the same cabling dynamically. The main goal is not only the diversity of connection possibilities, but dynamic cluster reconfiguration between the synapse units.
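Here is a minimal sketch in Python of that crossbar idea (class and unit names are my own hypothetical illustration): units tuned to the same wavelength form a cluster, and retuning a unit reconfigures the cluster without touching the cabling.

```python
# Minimal sketch of a wavelength-assigned crossbar: cluster membership is
# defined by the wavelength a unit is tuned to, not by physical wiring.

class Crossbar:
    def __init__(self):
        self.tuning = {}                      # unit -> wavelength (nm)

    def attach(self, unit, wavelength):
        self.tuning[unit] = wavelength

    def retune(self, unit, wavelength):
        self.tuning[unit] = wavelength        # dynamic reconfiguration

    def peers(self, unit):
        """Every other unit sharing this unit's wavelength hears its light."""
        wl = self.tuning[unit]
        return [u for u, w in self.tuning.items() if w == wl and u != unit]

bar = Crossbar()
bar.attach("synapse-0", 1550)
bar.attach("synapse-1", 1550)
bar.attach("synapse-2", 1310)
print(bar.peers("synapse-0"))                 # ['synapse-1']
bar.retune("synapse-2", 1550)                 # join the cluster on the fly
print(bar.peers("synapse-0"))                 # ['synapse-1', 'synapse-2']
```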


nice

Thanks for visiting!

I prefer the decentralized option; it will help with DR events and with redundancy. Another interesting point would be how easy it is to scale out these architectures for high-demand services.

I really like your posts and am keeping an eye on them.
All the best

Thanks! I also believe in the decentralized concept. I had a big argument with the HPE engineer who created the concept of The Machine. In the end they realized I was right. My question was about DR, and they hadn't even thought about it. :D

Well... this person will surely defend his project (maybe the one he has been working on for years), and this kind of person also has the backing of a huge company that needs sales... But a decentralized architecture is the best option today, and it is really bad if they are not thinking about DR. HPE is a big corporation with amazing products for data centers, but now that we have access to the blockchain we can run decentralized apps on it. A good example is EOS: with this project, which uses the blockchain (the ETH network), we can run applications with the power of the whole network... this is AWESOME! And there is no data center that can compare with such a scalable, decentralized, and redundant architecture. On the other hand, I don't know the price of EOS, but at the infrastructure level I think it is one of the most powerful.

Exciting! Thanks to HP and IBM, and thanks for sharing @cuttie1979

Hello, I love the content on your blog. This article is very exciting as usual, and I can't imagine how much time it must have taken you to make it. I am sad that your work is not appreciated at its true value, because you deserve 100 times more popularity. I feel that you do this out of joy, and I have enormous admiration for you. I wish you a very good day and I am looking forward to your next article. Bye my friend, and take care ;)

Thank you for your visit. I will do my best to keep entertaining you!

Fewer servers in the future and less energy consumption. This is the purpose of these inventions.

This is the topic of a future post :D Thanks :D

I think decentralisation is the way to go

I think so too. I believe in scaling with small, energy-efficient devices. This is why I really like the HPE Moonshot server with the m800 boards.