Daily techtalk: Number of datacenter networks

in technology •  7 years ago 

I have been thinking about this topic a lot. In general, the basic principle everyone follows is to minimize the number of protocols and vendors, which gives a simpler architecture and more unified support.


If we start to increase the number of components, the number of vendors and the training requirements also increase. From one point of view this is a problem, but if the result is a better performing architecture, we can live with it.

I am always monitoring developments in the HPC sector, because I believe they will shape the future of computing. For me it plays the same role as Formula 1 does for the automotive industry.

This is why I was happy when I first got hands-on experience with an InfiniBand network. The performance it gave us was far superior to anything I had seen before. Mellanox's VPI products gave us much more flexibility to balance between the Ethernet network and the InfiniBand network without any new procurement, which is why I like to use dual-port InfiniBand adapters. If InfiniBand suits the architecture, we can balance the load across four ports in one server (two dual-port cards). With FDR-class links that works out to roughly 4 × 54 Gb/s ≈ 216 Gb/s of network bandwidth between servers.
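As a quick illustration of how the VPI ports can be inspected, here is a minimal sketch, assuming a Linux host where the driver exposes the standard /sys/class/infiniband tree (as the Mellanox mlx4/mlx5 drivers do). It lists each RDMA port's current personality (InfiniBand or Ethernet) and its rate, and sums the nominal bandwidth:

```python
#!/usr/bin/env python3
"""Sketch: list RDMA ports, their link layer (InfiniBand vs Ethernet) and rate.

Assumes a Linux host whose driver exposes the standard sysfs tree under
/sys/class/infiniband (e.g. Mellanox VPI adapters with the mlx4/mlx5 drivers).
"""
import glob
import os

def read(path):
    with open(path) as f:
        return f.read().strip()

total_gbps = 0.0
for port_dir in sorted(glob.glob("/sys/class/infiniband/*/ports/*")):
    dev = port_dir.split("/")[4]           # e.g. mlx4_0
    port = os.path.basename(port_dir)      # e.g. 1
    link_layer = read(os.path.join(port_dir, "link_layer"))  # "InfiniBand" or "Ethernet"
    rate = read(os.path.join(port_dir, "rate"))              # e.g. "56 Gb/sec (4X FDR)"
    print(f"{dev} port {port}: {link_layer:10s} {rate}")
    total_gbps += float(rate.split()[0])

print(f"Aggregate signalling rate: {total_gbps:.0f} Gb/s")
```

Note that the rate reported by sysfs is the signalling rate (56 Gb/s for FDR); the usable data rate is closer to 54 Gb/s per port, which is where the ~216 Gb/s figure for four ports comes from.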

With this level of network bandwidth and exceptionally low latency, we can eliminate practically every network bottleneck. This is the main concept behind the systems I design:

Try to achieve maximum performance from each component,
so you know which components can be ruled out
when investigating performance problems.

With this system design we can free the operations team from these unnecessary tasks, so they can work on more important duties.

The other key area I see for InfiniBand is the storage area network. The traditional FC interconnect served the SAN environment well in the past, but with the increasing demand for low latency and high throughput, we need to look in new directions.

[Figure: Fibre Channel speed roadmap (fibre.png), image source]

We can see that 32Gb FC was introduced in 2014, but most vendors on the market still use 8Gb because of the price of the equipment. For larger-scale implementations, pricing is a factor that limits the product choices. From this perspective, many vendors have started to move towards Ethernet-based implementations such as iSCSI or FCoE. These protocols deliver the needed throughput, but in latency and CPU usage they are light years behind FC.

For me, the answer again lies with InfiniBand, which gives us the iSER extension of iSCSI, and SRP. Both implementations have the advantage of low latency and low CPU usage, without bandwidth limitations. If we compare the development of the FC interface to that of InfiniBand, we can see the difference:

[Figure: InfiniBand roadmap 2016 (infiniband-roadmap-2016.jpg), image source]
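As a practical aside before going through the speeds: on a Linux host you can quickly check whether the iSER and SRP transports mentioned above are available at all. Here is a minimal sketch, assuming they ship as the usual kernel modules (ib_iser / ib_srp on the initiator side, ib_isert / ib_srpt on the target side); on some kernels they may be compiled in instead:

```python
#!/usr/bin/env python3
"""Sketch: check whether the iSER / SRP transport modules are loaded.

Assumes a Linux host where these transports ship as the usual kernel
modules; names can differ if they are built into the kernel directly.
"""

WANTED = {
    "ib_iser":  "iSER initiator (iSCSI over RDMA)",
    "ib_isert": "iSER target",
    "ib_srp":   "SRP initiator",
    "ib_srpt":  "SRP target",
}

with open("/proc/modules") as f:
    loaded = {line.split()[0] for line in f}

for module, description in WANTED.items():
    state = "loaded" if module in loaded else "not loaded"
    print(f"{module:10s} {description:35s} {state}")
```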

With InfiniBand we can get 100 Gb/s per port today (EDR). If we use dual-port cards, and two cards for proper HA, we can reach 400 Gb/s per host.

This extreme bandwidth is enough for both storage traffic and machine-to-machine communication. With higher bandwidth per port, we also need fewer adapters.
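To make the adapter arithmetic concrete, here is a small sketch; the 400 Gb/s target and the per-port rates are illustrative assumptions, not measurements, and the point is only to show how fewer, faster ports reduce the adapter count compared to FC:

```python
#!/usr/bin/env python3
"""Sketch: how many dual-port adapters does a host need for a target bandwidth?

The target figure and per-port rates are illustrative assumptions used only
to show the arithmetic behind "higher bandwidth per port = fewer adapters".
"""
import math

TARGET_GBPS = 400          # hypothetical per-host target (storage + inter-node)
PORTS_PER_ADAPTER = 2      # dual-port cards

# Nominal per-port rates in Gb/s.
per_port_rate = {
    "EDR InfiniBand": 100,
    "32Gb Fibre Channel": 32,
    "8Gb Fibre Channel": 8,
}

for name, rate in per_port_rate.items():
    per_adapter = rate * PORTS_PER_ADAPTER
    adapters = math.ceil(TARGET_GBPS / per_adapter)
    print(f"{name:20s}: {per_adapter:4d} Gb/s per adapter -> "
          f"{adapters:2d} adapter(s) for {TARGET_GBPS} Gb/s")
```

With these assumed numbers, two dual-port EDR cards already reach the 400 Gb/s target, which is exactly the two-card HA setup described above.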

Of course, implementing this kind of new backbone does not happen from one day to the next. We still have old storage equipment and an FC SAN. To combine these networks, I have seen a good piece of equipment in Oracle's portfolio called Fabric Interconnect. If anyone is interested, more information can be found in the product's datasheet.

With the help of this product we can connect our new InfiniBand architecture to the old FC SAN. I think that with this solution we can migrate storage and hosts with more ease. For the transition period, products like this are a must.

Ethernet products I would use only for end-user communication. With proper network monitoring and network architecture planning, we can reduce the cost of the Ethernet network, because if we think about how much bandwidth one user's communication actually needs, in most cases gigabit is more than enough.
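A rough back-of-the-envelope sketch of what I mean; every per-application figure here is an illustrative assumption, and you should substitute numbers from your own network monitoring:

```python
#!/usr/bin/env python3
"""Sketch: rough per-user bandwidth estimate for an office workstation.

All per-application figures are illustrative assumptions; plug in numbers
from your own network monitoring to size the access layer properly.
"""

# Assumed concurrent traffic for a single "heavy" office user, in Mb/s.
per_user_mbps = {
    "HD video conference": 5,
    "VoIP call": 0.1,
    "cloud / web apps": 10,
    "file share burst": 200,   # occasional large copy
}

total = sum(per_user_mbps.values())
print(f"Estimated peak per-user demand: ~{total:.0f} Mb/s")
print(f"Headroom on a 1 Gb/s access port: ~{1000 / total:.1f}x")
```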

With this in mind, I would plan for two networks: one datacenter network for the servers, and one for end-user and outside communication. For the datacenter I suggest InfiniBand, and for the rest Ethernet. I think that, with time, the FC infrastructure will become history.

Comments:

Interesting information <3, thanks!

iSCSI and NFS will continue to dominate the market because of their simplicity, interoperability, and the momentum of the market. FC and Infiniband will continue to exist for a long time in the high end niche.