Writing about SCN, DAGs, CDA, etc.


Ok, there's this task of formulating a set of scientifically sensible questions about this scientifically heavy project. What do I know so far? The system uses a modified version of Directed Acyclic Graphs (DAGs) to implement a concept of service-based Internet communication. The idea is that instead of a domain name or IP, I request a specific service by its name, and then the service provider decides how to better route my request. It makes sense: for example, when I address some video streaming service, I request both the web page and the source of the high-quality video using the same domain name. The DNS server converts my request into the same IP address without taking into consideration that video requires higher bandwidth, and that this second request should probably be routed to a different server or group of servers specifically designed to withstand high-intensity traffic. Although I simplify here: a video source requested from an adequate server is normally recognized by the resource provider as a video stream request and directed to the respective high-load infrastructure, but in a more general sense the standard DNS/IP request scheme creates a certain bottleneck. For example, when I request Google, the DNS server simply gives me its IP address or the list of available IP addresses. It doesn't take into consideration my location relative to that IP address, which can be physically located in the other part of the world. Neither does my computer; it just begins trying to access IP addresses from the received list. So my first request might be directed to a server located on the other side of the planet, while I might have a Google server located nearby, providing the same service. Because Google apparently has a distributed infrastructure and all that. It can later redirect my further requests based on my own location but, once again, that would require altering the domain name, for example, from google.com to google.ru, and it's a really cumbersome and inflexible system, come to think of it. What if I'm also located in some spot from which, due to the screwed-up network infrastructure, I can reach only one specific server? The system of accessing Internet services by domain names technically cannot adequately solve this problem, because domain name resolution services aren't designed to analyze the nuances of network infrastructure. They don't care about it. Similarly, if an Internet company provides different services requiring different infrastructure, equipment, etc. - e.g. a video streaming service and a cloud backup service - ideally requests should be routed accordingly as well. The standard DNS-to-IP resolution procedure once again creates a sort of bottleneck, directing all the different requests to a single entry point. In contrast, a system in which resources are addressed not by domain name but by a specific service name would, ideally, create a situation where users' requests are routed directly to the relevant segment of the service provider's infrastructure, also taking into consideration the user's location, the current level of system load, etc. Who is going to conduct this sophisticated procedure of service request resolution is another question. It's probably one of the questions that would make sense to formulate and address to the project's team. The project claims to use this new paradigm of addressing Internet resources by service name, but how is the service itself going to be accessed in the first place?
If it's going to be some central point addressed by a domain name then, once again, we are facing the problem of the single-entry-point bottleneck. What also comes to mind here are decentralized protocols, like BitTorrent, that don't use a single entry point or a single connection. Torrents work quite differently from the traditional client-server scheme, where I address a specific server by its domain name and an intermediary in the form of a DNS server helps me determine this server's exact location and address. In torrents a single file can be distributed across multiple random hosts, none of which can be predicted in advance. The situation constantly changes, with hosts joining and leaving the network all the time, so, naturally, there's no single centralized entity that can determine, given a requested file name, where exactly it's located. Instead, when I request a file from the torrent network I receive a list of hosts who know stuff about hosts who know stuff about... and so on. Like, all the participants of the torrent exchange keep information about other participants they got in contact with. So when I address any single participant of the torrent exchange (initially from the list of hosts in the torrent file) I get updated on the hosts with which this host currently communicates. Subsequently, after getting in touch with those hosts, I receive information about another bunch of hosts with which those hosts communicate, and so on. So, eventually, I gather enough information to form a comprehensive picture of the situation with the requested file. What I want to emphasize is that this information comes not from a single dedicated server, which is the case with DNS, but is collected from the exchange participants themselves, each of whom keeps a part of the whole picture relevant to his own activities. When I join the torrent network, i.e. begin to download the file, I similarly become a source of information, allowing new participants to figure out whom else they need to contact to get the desired resource. In this sense, the torrent network doesn't have a single entry point. When I request a certain resource, my request doesn't imply anything about the network topology, which can change all the time. In some way, the concept is similar to the idea of addressing Internet resources by service name.
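Just to make that discovery process concrete, here's a toy sketch in Python: starting from the bootstrap peers listed in a torrent file, keep asking every newly learned peer whom they know until no new peers turn up. The peer names and the KNOWN_PEERS table are invented stand-ins for real network calls.

```python
# A toy sketch of peer discovery: ask every newly learned peer whom *they*
# talk to, starting from a few bootstrap peers, until nothing new shows up.
# KNOWN_PEERS is a made-up stand-in for the real "ask a peer" network call.
KNOWN_PEERS = {
    "bootstrap-1": ["peer-a", "peer-b"],
    "peer-a": ["peer-c", "bootstrap-1"],
    "peer-b": ["peer-c", "peer-d"],
    "peer-c": [],
    "peer-d": ["peer-a"],
}

def discover_swarm(bootstrap: list[str]) -> set[str]:
    known, frontier = set(bootstrap), list(bootstrap)
    while frontier:
        peer = frontier.pop()
        for neighbour in KNOWN_PEERS.get(peer, []):   # "whom do you know?"
            if neighbour not in known:
                known.add(neighbour)
                frontier.append(neighbour)
    return known

print(discover_swarm(["bootstrap-1"]))
# {'bootstrap-1', 'peer-a', 'peer-b', 'peer-c', 'peer-d'}
```

No single node held the whole picture; the full list emerged only from stitching together what each peer knew locally, which is the point being made above.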

Another aspect of the project in question is its use of a modified version of Directed Acyclic Graphs (DAGs), which they somewhat emphasize. As I understand it, a DAG is a very abstract and broad mathematical concept that can be applied to an endless number of scenarios: from building family trees to determining how, in a system of variables depending on one another, changing one variable would affect the others, and so on. One illustrative example of the practical implementation of DAGs is Bayesian networks.


It's cloudy, and the weather is getting chilly, which creates better conditions for the brain.

So, anyway, Bayesian networks. In a nutshell, Bayesian networks are designed to calculate the probabilities of certain things or events, taking into consideration other things and events that can make them more or less likely. For example, if I take a walk, the probability of me getting hit by a car is significantly higher than if I stay at home. On the other hand, if I got hit by a car, the probability that I left home that day is almost a hundred percent. Unless the car entered my house, climbed to the second floor, broke into my apartment and hit me. The probability of such an event is infinitesimally small, but it still exists. Here I'd like to emphasize that Bayesian networks don't tell anything for sure, but if the probability of a certain outcome is high enough, it can be sufficient for making decisions. In other words, Bayesian networks model systems that deal with causation, where certain things cause other things, which in turn cause other things, and so on, taking into consideration that one cause might have a number of different effects, each with a different probability. This is basically what Bayesian networks are about: causes, effects, and the conditional probabilities connecting them. In the context of my current research, a Bayesian network is a particular implementation of the broader mathematical concept of Directed Acyclic Graphs (DAGs).

The DAG is a pretty simple structure: basically, it's a set of nodes connected with arrows. I can jump from one node to another, following the arrows, and this is basically it. Two key characteristics distinguish DAGs from other types of graphs. First, DAGs cannot contain cycles; in other words, if I walk across the nodes of a DAG from a certain point, following the arrows, there's no way I can return to that point. This characteristic is essential when DAGs are applied to tracking dependencies. Take, for example, an Excel sheet. There, one variable can depend on another, which in turn can depend on another, and so on. When I change a number in one cell, Excel recalculates all other cells with values depending on the cell I've changed. What's important is that the cell I change shouldn't depend on any cell dependent on it. For example, if I have a cell A that contains a formula A=B+2, and the cell B also contains a formula B=A-2, eventually I have no idea what actual values both those cells contain. In the context of DAGs, this is an important requirement - an absence of circular dependencies, i.e. cycles in the graph - guaranteeing that I know with certainty how each node of the graph affects other nodes. In some way, it can be compared to a corporate structure. I have my boss, and he, in turn, has his boss. His boss gives him a command, and he directs it to me. Everything works fine. Now imagine that I'm the boss of the boss of my boss. It would be really messed up. So, yes, no cycles. The second important aspect of Directed Acyclic Graphs is that they are directed. In other words, the nodes of the graph are connected with arrows pointing in a certain direction. I can travel from node A to node B, but not the other way around. This characteristic is important in the sense that there are very specific paths along which the graph can be walked through, and also this direction arranges nodes in a sort of hierarchy. I know that node B depends on node A, and node C depends on node B, so I know, for example, that if something about node B changes, it will affect node C but not node A.
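To make the spreadsheet analogy concrete, here's a minimal Python sketch (the cells and formulas are invented for illustration): dependencies form a DAG, a topological sort gives a safe recalculation order, and a cycle like A=B+2 with B=A-2 gets rejected outright.

```python
# Minimal spreadsheet-style dependency DAG (Python 3.9+ for graphlib).
from graphlib import TopologicalSorter, CycleError

# cell -> cells it depends on (B depends on A, C depends on B)
deps = {"A": set(), "B": {"A"}, "C": {"B"}}
formulas = {"A": lambda v: 1, "B": lambda v: v["A"] + 2, "C": lambda v: v["B"] * 10}

def recalculate(deps, formulas):
    try:
        order = list(TopologicalSorter(deps).static_order())  # raises on cycles
    except CycleError as err:
        raise ValueError(f"circular dependency, like A=B+2 with B=A-2: {err}")
    values = {}
    for cell in order:              # upstream cells are computed first
        values[cell] = formulas[cell](values)
    return values

print(recalculate(deps, formulas))   # {'A': 1, 'B': 3, 'C': 30}
```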

Well, Ok, getting back to the project in question. It claims that it uses a simplified version of a DAG to implement its service-name based network infrastructure. To put it simply, there's some DAG network that determines where to direct specific users' requests. This statement raises two questions. First, what is a "simplified" Directed Acyclic Graph? I mean, a DAG is already about as simple as you can get; how is it possible to make it even simpler? Well, at least my imagination fails when I think about possible ways to simplify this pretty straightforward structure. And the second question is how exactly the DAG is used to implement this service-name based network navigation. Come to think of it, the statement "we use Directed Acyclic Graphs to route network requests" is very broad and vague and doesn't really say much. Similarly, I could say "We use graphs to route network requests" or "We use algorithms to route network requests." It's a phrase that sounds somewhat obvious and doesn't really contain any valuable information. So quite an obvious question is what actual implementation of DAGs the project uses to achieve whatever they are planning to achieve. Also, naming a broad mathematical concept among the project's technological aspects sounds somewhat suspicious. Although so far the project looks respectable enough, mostly due to its stellar, all-academic team and advisors, including people who actually laid the very foundations of the Internet.

Ok, without digging even deeper into this turf of mathematical abstractions, let's see, what else do they have? Another touted feature of the project is its use of the double auction principle. A double auction is when a number of sellers have the same resource, and there's also a number of buyers competing for that resource. All the sellers simultaneously declare the price for which they are willing to sell the resource, and the buyers declare the price for which they are willing to buy this stuff. If the selling price is lower than the buying price, then all the prices in between are legit for settling the bargain, and also there's some elusive point of equilibrium that makes the double auction essentially a mathematical model. Without getting deeper now - mostly because I don't remember shit about this equilibrium stuff, and how the presence of multiple buyers and sellers affects the system and its balance - let's just note that the X project uses a double auction for pricing the services that are going to work on its platform. The most obvious question probably is why they consider the double auction superior to other auction models, or, more generally, to other pricing models. If I understood correctly, the mathematical description in the White Paper explains how this model works, but it's still not entirely clear why exactly it was chosen, and what particular advantages it has in the context of this project. Ok, time to start digging deeper and figuring out stuff.


So, this situation looks like some sort of a dead end. I'm not ready to pose as a qualified member of this scientific organization, ready to offer smart and knowledgeable questions about the topic. Especially in this heat, when my brain is melting like an unfortunate jellyfish thrown on the sizzling sand in the center of an overheated desert. What do I know so far anyway? The project uses a modification of Directed Acyclic Graphs as a basis for its bottom-level infrastructure. This bottom-level infrastructure serves the same purpose as a blockchain, namely, it stores transactions, but I struggle to understand what kind of transactions this project - working in the context of providing a platform for exchanging services in the realm of IoT - might have.

Also, a side note: the service infrastructure itself uses know-how from the Serval project, which somehow squeezes a service signature between the network (IP) and transport (TCP) layers of the OSI model. The main idea is that Internet infrastructure can be flexible. When I address services using DNS names I, in fact, narrow the service provider's options to route and balance my requests, direct them to the appropriate hardware, etc. There are load-balancing systems, of course, and forwarding, and rerouting of requests directed to specific subdomains and specific sites, as well as addressing specific services using specific ports.

But here's another thing. When I use ports, it implies that I know in advance which type of service listens on what port. For example, I know that an HTTP server always listens on port 80. So, here lies another potential problem. If there's no beforehand agreement about which service uses which port - and we're talking about a situation where there can be hundreds and thousands of various services, all using the TCP/IP protocol - I don't know which port to address, and I don't know anything about the services the server provides or the nuances of their application-level protocols. The way it currently works, there's always a beforehand agreement. For example, in order to use Skype or Telegram, I first install software that knows both the ports their respective servers listen on and how those application-level protocols work. Let's imagine a situation where I don't install any specific apps, but instead I have some sort of universal browser that can work with different application protocols. I mean, in addition to browsing the Internet using the HTTP protocol, this hypothetical browser can use Skype and Telegram protocols as well (if the addressed server provides those services). The question is, how does the browser know that? And how does the browser know how to maintain a dialogue with some service, about the existence of which on the server it has just learned? And how does this magical browser know what kind of services some particular server provides?

Well, here the concept of a service-based Internet protocol comes into the limelight. The idea is that the application level of communication shifts from the uppermost position of the OSI model closer to the network and transport layers of the Internet protocol. It means that information about services and applications can be received by the client, for example, at the stage of initially establishing a TCP connection. Presently, it's done the following way. TCP connections are initiated by applications (e.g. a browser) and established by the software working at the bottom level and supporting the transport-level protocol (i.e. the network driver). The network driver doesn't know anything about the application that initiated a given TCP connection, nor does it know anything about the HTTP protocol in general. For the network driver, HTTP is nothing more than the payload of the TCP connection it establishes.
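A small illustration of that last point, assuming a placeholder host: from the transport level's perspective, an HTTP request is just bytes stuffed into a TCP payload; the socket layer neither knows nor cares that an application protocol is involved.

```python
# The transport layer's view of HTTP: open a TCP connection, push some bytes.
# Nothing below the application knows those bytes happen to be an HTTP request.
import socket

with socket.create_connection(("example.com", 80), timeout=5) as sock:
    # the whole "application protocol" is just this byte string to the layers below
    sock.sendall(b"GET / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n")
    response = b""
    while chunk := sock.recv(4096):
        response += chunk

print(response.split(b"\r\n", 1)[0])   # e.g. b'HTTP/1.1 200 OK'
```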

The point is, transport-level software currently doesn't get involved with any application-specific aspects of the Internet exchange, and that's kind of normal as long as there's no interdependence between the network topology and the types of services it supports. But what if, for example, I examine my network infrastructure and decide to direct all the voice calls to one specific server and file download requests to a different server? Here this division of labor between transport- and application-level software starts to create certain problems. For example, I want to make this redirection using routers because, well, if I allow all the traffic sent in my direction to reach a single server, that server will become a bottleneck. Routers don't know anything about Internet apps; they know about the transport/network aspects of communication and ports. Well, I can use ports and create rules redirecting traffic according to the port number. But one app can use a range of ports, and I might not even know in advance which ports a certain application uses. I only know, for example, that it's a "voice call service".

There's an interesting nuance to think about. According to the OSI model, there exists a clear divide between the bottom (transport/network) and top (application) levels of Internet communication, but in fact, it's not entirely true. The port number is formally part of the transport level, but often it implicitly suggests the type of service I address. Like, port 80 normally refers to the HTTP (application-level) protocol. In fact, it's a sort of convention that messes up the initial idea of the port in TCP/IP, where a port means nothing more than that TCP packets containing this port in their header are going to be directed to a certain application (or rather, the application "listens on the port" - filters the respective packets from the stream of network traffic). It doesn't imply that it should be some specific type of application protocol, like port 80 always meaning HTTP. Meanwhile, when I begin to add rules instructing the router to redirect packets addressed to port 80 to a specific server, I implicitly teach transport-level software to deal with application-level aspects of Internet communication. Which naturally leads to the idea of a service-based Internet protocol. Because, well, I can write a list of rules covering the apps and port numbers I know about. But let's consider a hypothetical situation where I have a number of voice call services about which I don't know much, or maybe those services are constantly added and removed by different service providers working on my platform and within my infrastructure. Like, I might have no idea what port numbers their services might use; the only thing I know is that they're voice call services. Which is a kinda normal situation, because there's no principle in the Internet architecture saying that certain types of applications should use certain port numbers. Those are just de facto conventions.

So, this proposition, particularly as used in Serval, makes service identification a part of the transport/network level of the Internet protocol. This way, in the previous example, I can redirect voice call traffic to a specific part of my infrastructure using a "voice call service" signature embedded, for example, into the TCP header. The idea is very sensible. I try to do this kind of operation anyway using ports, but the relationship between ports and applications is just a convention. For example, what if we had, like, thousands of different custom video streaming services, using whatever ports their designers decided to use? In this case, I would prefer to rely on something more general, like a signature saying that this connection belongs to a voice call service or a video streaming service. Then I'd be able to perform my transport-level manipulations using this signature to determine what type of service I'm dealing with.
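A tiny sketch of the difference, with made-up cluster names and signatures: routing by port needs one rule for every port somebody happened to pick, while routing by a service signature needs one rule per kind of service.

```python
# Port-based vs signature-based routing decisions (all names are invented).
PORT_RULES = {80: "web-cluster", 5060: "voice-cluster"}     # one rule per known port
SERVICE_RULES = {"voice-call": "voice-cluster", "video-stream": "stream-cluster"}

def route_by_port(dst_port: int) -> str:
    # an unfamiliar port tells the router nothing about the application behind it
    return PORT_RULES.get(dst_port, "default-cluster")

def route_by_service(service_signature: str) -> str:
    # a signature carried in the packet header (as Serval proposes) names the
    # kind of service directly, whatever port the app chose to use
    return SERVICE_RULES.get(service_signature, "default-cluster")

print(route_by_port(49152))              # default-cluster: the port says nothing
print(route_by_service("voice-call"))    # voice-cluster
```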


Ok, speaking of Directed Acyclic Graphs (DAGs). In the context of blockchain-like structures (or, like, derivative architectures inspired by the blockchain), the usage of DAG-based structures raises several questions. For example, such structures (like in IOTA) promise fast transaction confirmation due to the fact that they don't use blocks and heavy computations to hash them. As I understood it, anyone who initiates a transaction takes care of a couple of previous transactions, confirming them. So the system doesn't have miners, and its participants confirm the transactions themselves. And also, each transaction is confirmed separately. I wonder, for example, what prevents any participant from confirming malicious transactions. In classical PoW the amount of work necessary to confirm transactions essentially leads to a situation where the trustworthiness of the whole system - the fact that it confirms only legit transactions and rejects incorrect and malicious transactions, like double spends and stuff - is guaranteed by the majority of active participants putting tangible effort into preserving the system in the correct, truthful state. Like, we assume that a lot of people who put significant effort into something in most cases act sensibly, or at least according to the rules. In other words, it's unlikely that a lot of random people will decide to break the rules and do something malicious without having a clear incentive to do so. So, essentially, one key element of the security of PoW consensus-based systems is the amount of effort, computational power, and electricity its users spend to keep it honest. (In blockchain terms, this is part of what's called network consensus.) To break such a system, an attacker needs to spend a comparable amount of effort and electricity, which would require a conspiracy on a large scale involving a lot of people, power plants, and stuff. In other words, it's unlikely.

To draw an analogy, PoW network consensus can be compared to a situation where a large group of people pushes a heavily loaded cart in a certain direction, for example, toward the village. They push the cart in that particular direction not because they agreed on it in advance; in fact, each participant of the process acts independently, according to his personal interests. The thing is, pushing the cart in that direction is just the most sensible thing to do, giving every honest pusher the maximum payoff. A potential attacker would need to gather a similarly large crowd of people who would agree to exert significant physical effort, so they would be able to outperform the honest pushers, and the cart would start moving in the direction opposite to the village. Two things are worth noting here: even if malicious pushers try but are unable to overcome the honest pushers, they still spend resources, electricity, etc. So the payoff for malicious activities should be significant and worth this titanic effort.

The heaviness of the cart here is an important factor. For example, the system of network consensus based on the majority vote is considered significantly less secure, simply because an attempt to break it doesn't cost the attacker anything.

The second key element is that the PoW principle creates a situation where active network participants (miners) gain the most when they act honestly, and the possible payoffs for malicious activities, in the case of success, are insignificant. In particular, it keeps miners from any spontaneously devised collective attempt to hack the system. It's worth noting that both elements are essential for the security of a system based on PoW.

So compared to the PoW consensus paradigm, the system described under the name DAG doesn't really look secure. Like, if any participant can easily confirm (hash) transactions - the procedure doesn't contain artificial computational difficulty - then what prevents participants from conspiring in order to populate the network with malicious transactions, double spends and such? For example, I can team up with another person; he'll be sending double-spend transactions to the network, and I'll stamp signatures on them, and vice versa. Certainly, without the artificial difficulty of hashing, transaction confirmations are going to be fast, but, at the same time, it kinda defeats the purpose of the blockchain as a network with guaranteed honest behavior of all its users.

(Here it's worth noting that the honesty of a blockchain network is achieved entirely by a set of principles and incentives that makes a large group of random people act in a certain way, according to game theory principles. In other words, it's a trustless system. For example, when I make a money transfer using a traditional banking service, I similarly have some confidence that I won't be cheated. This is because I trust the banking system: banks are bound by laws enforced by governments. Now let's imagine a state of total anarchy; states and laws don't exist anymore, and stuff. Banking systems won't work the same way, and it's likely I won't be able to use them because I won't be able to trust them. There will be nothing preventing them from cheating. On the other hand, I'll still be able to trust blockchain consensus-based systems because they are based entirely on math, i.e. game theory, the rules of which never change - anarchy or not.)

Similarly, we could remove the artificial computational difficulty from the traditional blockchain confirmation mechanism, in other words, remove the W from PoW. The system would become incredibly fast, but it would stop being trustworthy. If I can confirm any transactions, and it doesn't cost me any effort (computational resources, electricity, etc.), I'll probably be spending time confirming malicious transactions made by my cronies, and they, in turn, will be similarly confirming mine. I don't see so far how the proposed DAG consensus algorithm is not exactly this type of situation.

Another thing I wonder about: DAG proponents (IOTA) tout its ability to be a ledger with many branches. Like, the traditional blockchain is a strictly linear ledger; in a DAG it seems like there can be multiple branches spreading from one node. In other words, for example, the ledger has a last confirmed transaction, and I can create two new transactions and use the hash of that last confirmed transaction to "attach" both new transactions. The first question: what's actually the point? Well, technically it can speed up transactional activity since somebody who'd like to add a new transaction won't need to wait for the confirmation of the previously added transaction. But if transaction confirmation is an easy-breezy process anyway (doesn't include any PoW or whatever), it doesn't seem like a big deal. The second question: how would anybody be able to determine which transactions are correct and which are not? One of the key paradigms of the classical blockchain is that there can be only one correct branch. A blockchain can branch too, and branches normally mean that there is some discord among miners, and two different groups added two different blocks contradicting each other. It also means that only one of those branches will eventually be accepted as correct.
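Here's a toy model of that branching and attaching, roughly along the lines of how IOTA's Tangle is usually explained (not the project's actual code): each new transaction picks a couple of still-unapproved transactions, references their hashes, and that's the whole "confirmation."

```python
# Toy DAG ledger: every new transaction "approves" up to two current tips.
# No PoW, no miners; the invented payloads are only for illustration.
import hashlib, random

class DagLedger:
    def __init__(self):
        self.txs = {"genesis": {"parents": [], "payload": "genesis"}}
        self.tips = ["genesis"]          # transactions nobody has approved yet

    def add(self, payload: str) -> str:
        parents = random.sample(self.tips, k=min(2, len(self.tips)))
        tx_id = hashlib.sha256((payload + "".join(parents)).encode()).hexdigest()[:12]
        self.txs[tx_id] = {"parents": parents, "payload": payload}
        for p in parents:                # the chosen parents now count as approved
            if p in self.tips:
                self.tips.remove(p)
        self.tips.append(tx_id)
        return tx_id

ledger = DagLedger()
ledger.add("Alice pays Bob 5")
ledger.add("Bob pays Carol 2")
print(ledger.tips)   # whatever is still waiting for approval
```

Nothing in this little model checks whether a referenced transaction is actually legit, which is exactly the worry voiced above.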


Ok, continuing this process of technical ruminations sponsored by carb intoxication and the urgency of the imperative. Also, I made a mental note that even when I don't remember shit at the beginning - like, all the thoughts are scattered, and there's only a layer of thick, impenetrable fog of war where the respective picture is supposed to be - the process of writing shit down actually helps clear things up and restore the relevant imagery, and its relation to the appropriate concepts, and stuff. Although it's not a fast process, it's still somewhat reliable and capable of miracles, like conjuring stuff out of a seemingly empty brain. Like a rabbit from a hat or something. I think it has to do with the firing of certain synapses or something like that.

So, there are three concepts I walk around - carrying with me the blunt instrument of dilettante curiosity, with which I plan to cheerfully dissect them like a child asking impossible questions nobody has a prepared answer to - the service-name based Internet protocol (Serval), Directed Acyclic Graphs as an underlying architecture for a new generation of blockchain-like structures, and the continuous double auction as a model for finding an optimal equilibrium price on a market inundated with sellers and buyers who have no idea what costs what (or how to set the correct prices or anything). Speaking of the continuous double auction, the method of finding the point of equilibrium, as I understand it, boils down to arranging sellers and buyers so that they form something of a funnel. I arrange them in two parallel columns: on the left, I arrange sellers so their prices follow in ascending order - at the top there's the seller with the lowest price, below him the seller with a slightly higher price, and so on. On the right, I similarly arrange buyers, but this time the amount of money they are ready to pay follows in descending order: at the top is the buyer who's ready to pay the most, below is somebody who is ready to pay less, and so on. So, as a result, I get a funnel. At its top the funnel presents the widest range of possibilities to settle a bargain - those are the ideal seller and buyer; the seller asks a very low price, and the buyer is ready to pay a very high price - so any price between these two borderline points is a potential equilibrium price. (Also, it's worth noting that the distance from the equilibrium price to the seller's price is the seller's additional monetary win, like, because he didn't initially expect this additional amount of money. Similarly, the distance between the equilibrium price and the buyer's price is a win for the buyer. If, for example, the bargain is settled on the buyer's terms, i.e. the maximum price, the seller gets the maximum additional benefit, and the buyer gets no additional benefit. Which is not cool, so the fair price should be somewhere in the middle.) So, each following pair of seller and buyer in this structure has worse mutual terms - a higher seller's price and a lower buyer's price - and the range of prices allowing the deal to be settled gets narrower as we descend the funnel. Eventually, at some point, we reach the funnel's bottleneck where the seller's and buyer's prices are approximately the same. Below this point, nobody can buy or sell anything because sellers ask more than buyers are ready to pay.

So the point is, this funnel construction in its algebraic form gives a set of equations that allow finding the equilibrium price. The equilibrium price in this case - when we take into consideration all the buyers and sellers on the market - is the price that allows the market to "clear"; in other words, there won't be a situation where somebody has money and is ready to buy, but there's no more stuff for sale (deficit), or a situation where somebody is ready to sell, but there's not enough demand - people ready to buy stuff at this price (overproduction).
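A minimal sketch of that funnel in Python, with invented numbers: sort asks ascending and bids descending, walk down while a bid still covers an ask, and take a price inside the last crossing pair as the clearing price.

```python
# Toy double-auction clearing: the "funnel" as two sorted columns.
asks = sorted([3, 7, 5, 9])                 # sellers, lowest ask first
bids = sorted([10, 4, 8, 6], reverse=True)  # buyers, highest bid first

def clearing_price(asks, bids):
    last_ask = last_bid = None
    for ask, bid in zip(asks, bids):
        if ask > bid:          # the funnel has closed: nobody below this point trades
            break
        last_ask, last_bid = ask, bid
    if last_ask is None:
        return None            # no overlap at all, the market doesn't clear
    return (last_ask + last_bid) / 2   # split the surplus of the marginal pair

print(clearing_price(asks, bids))   # 6.5 with the numbers above
```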

So, speaking of the double auction: it's considered a particularly cool system from both the mathematical and practical points of view because it allows finding ways to give everybody maximum satisfaction. It's widely used on stock markets, Forex, etc. Getting deeper into a comparison between different auction types would be way beyond my scope, but at least I can nitpick on various aspects regarding the application of this technique in this particular case - a subsystem offering a marketplace for services in the IoT sphere - maybe I can question the number of auction rounds or something like that.

Returning to the other components of the project's technological stack. Once again, its usage of Directed Acyclic Graphs. As I noted above, I don't really understand what advantages can be derived from a structure that, instead of one sequence of connected nodes (blocks, whatever), allows an indefinite number of potential branches spreading from any single node. In particular, how can I tell which branches represent correct transactions, and which branches contain garbage and malicious activities and therefore need to be "withered"? Plus, if there's no problem with transaction confirmation speed due to the lack of PoW (which raises a different question), then what is the point of this branching at all? To make the structure more convoluted and hard to navigate or something? And also, there's another potential problem. For example, the traditional blockchain is considered secure and immutable because it's hard to rehash blocks. It takes a lot of PoW. It successfully prevents malicious actors from trying to tamper with the existing blockchain information, because any change to any transaction would result in the necessity to rehash the block where that transaction resides. Plus, it would also be necessary to rehash all the following blocks, added after that transaction took place. Since the standard blockchain architecture is a single sequence of blocks, there can be a lot of blocks added on top of the targeted block, and considering the enormous computational difficulty of rehashing even a single block, a potential hacker faces an insurmountable amount of work to alter a single transaction. Well, this is not so in the DAG system, where hashing is easy and, due to endless branching, any transaction, even one made long ago, can be located in a "leaf" of this graph, so the hacker would just need to rehash it, and there would be no additional transactions attached to the targeted transaction that would need to be rehashed as well. So, the hacker's task would be very easy in this case.


Returning to yet another aspect of the X project's architecture, namely, its usage of service-name based network protocols. The idea was initially developed by the so-called Serval project at Princeton. In fact, they didn't do anything that would drastically change the existing Internet standards. They simply squeezed this service-identifying information into some reserved bytes of TCP packets. So it doesn't require rebuilding the Internet from the bottom up, which would be implausible. Starting from that, what can actually be achieved using this additional information? The typical problems most online service providers face are the following: how to balance and redistribute Internet traffic according to users' geographic location, the level of load in various parts of the network infrastructure, the type of Internet traffic, etc.? For example, is there any way to reroute all the traffic related to video streaming to the respective high-load servers right from the start, without channeling it through a single entry point that can become the system's bottleneck?

One thing I don't quite understand is that the X project claims to implement this service-based routing on top of the existing network infrastructure and protocols. But isn't it some sort of application level then? Well, because the key idea of Serval is to embed service-based routing somewhere between TCP and IP; in other words, this new paradigm has to be implemented at the transport/network level of Internet communication. Otherwise, it's not something particularly new.

Like, how do the Serval project founders see the ideal realization of this service-based communication? In the ideal case, IP addresses don't matter. As I understand it, I don't address any specific IP host, apart from the entry point, through which I'm re-routed to one or another part of the service provider's network infrastructure, based on the service I request, my geographical location, etc. In the ideal case, I don't even need to access some specific entry point that could become a potential bottleneck, since probably all the requests to a specific online service provider are going to pass through it first. By the way, in the ideal case, the same service (probably a service with the same signature or something) can be provided by a number of companies, and I couldn't care less to which one of them I'm eventually directed. Like, similarly to domain name resolution, the ideal implementation of the service-based Internet stipulates the existence of service-name resolution services. The key difference would be that those services would provide a kind of dynamic routing, as opposed to the current static DNS-to-IP matching tables. A good example is, once again, torrent networks, where I simply request a resource with a certain signature from the first host that comes across (specified in the torrent file), and then I connect to other hosts, from which I learn whom else I can connect to, and so on. In this case, such a system - its topology - is in constant flux; it is dynamically changing all the time, reflecting the current situation. As opposed to static DNS tables. Service-name based Internet communication is going to be something like that, the way I understand it. For example, I have a file similar to a torrent file that contains the signature of a certain service, say, a video streaming service, and a list of IP addresses of hosts that I can address to get additional information. Those hosts don't necessarily provide the video streaming service, although they can. Instead, they might contain dynamically updated information about various hosts and subnets all across the Internet providing this service. As a matter of fact, what might likely happen in this hypothetical situation is the following: I address the host whose address I learned from the torrent... eh... service file, and the host - after determining my geographical location and the type of device from which I access the Internet, and examining the current situation with active streaming services and their load, as well as evaluating my requirements for network bandwidth and speed - makes an optimal decision and connects me to some server that is active, located geographically close to me, currently not under heavy load, and providing a bandwidth and connection speed that would satisfy me. Something like that. Mind you, when I first address such a system, I haven't the slightest idea where I will eventually be directed. This suits the paradigm of the dynamic Internet. Like, for example, what prevents billions of Internet users from being online service providers at the same time? Probably dynamically changing IP addresses, and the low bandwidth and performance of individual workstations. This is the legacy of the stone age of the Internet, where there existed a definitive split between service providers and service consumers, and service providers were supposed to be robust servers with stable IP addresses and high network bandwidth, performance, and stuff.
By the way, this paradigm is also successfully broken by torrent networks, where members act as data providers and data consumers at the same time. And the concerted effort of a large number of participants successfully makes up for the low bandwidth and performance of individual hosts. I constantly return to this example because, in my view, torrent networks are one of the most miraculous and amazing things on the Internet, considering how stable, fast, effective, and robust they are.
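And here's a sketch of what that hypothetical service-name resolution step from a couple of paragraphs above might look like; the registry, regions, and scoring are all invented. The point is only that the resolver, not the client, decides which live, nearby, not-overloaded instance answers.

```python
# Hypothetical service-name resolution: pick an instance that is up, close
# to the client, and not overloaded. Everything here is a made-up placeholder.
SERVICE_REGISTRY = {
    "#video-stream": [
        {"host": "10.0.1.5",  "region": "eu",   "load": 0.80, "up": True},
        {"host": "10.0.2.9",  "region": "asia", "load": 0.20, "up": True},
        {"host": "10.0.3.14", "region": "us",   "load": 0.10, "up": False},
    ]
}
REGION_DISTANCE = {("eu", "eu"): 0, ("eu", "asia"): 2, ("eu", "us"): 1,
                   ("asia", "asia"): 0, ("asia", "eu"): 2, ("asia", "us"): 2,
                   ("us", "us"): 0, ("us", "eu"): 1, ("us", "asia"): 2}

def resolve(service: str, client_region: str) -> str:
    candidates = [i for i in SERVICE_REGISTRY.get(service, []) if i["up"]]
    if not candidates:
        raise LookupError(f"no live instance of {service}")
    # lower is better: distance to the client first, then current load
    best = min(candidates,
               key=lambda i: (REGION_DISTANCE[(client_region, i["region"])], i["load"]))
    return best["host"]

print(resolve("#video-stream", "eu"))   # 10.0.1.5 despite its load: it's closest
```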

Even if we don't talk about some hypothetical ideal situation where there's a service-name resolution infrastructure in place and stuff, there are still benefits to be gained from just inserting service-name signatures into TCP packets (the way it is done by the Serval project). For example, I can have a sort of gateway on the edge of my service infrastructure, whose sole purpose is to extract service-name signatures from the packets and direct the respective traffic accordingly. For example, like, I know that I have a server somewhere in the depths of my datacenter that provides a video service according to some protocol specified by the #video456 service signature, and it listens on port, say, 43678. The client in this case doesn't need to know anything about the specific server and port through which the required service can be accessed (in the case of standard Internet communication, in order to access a specific service, I need to know in advance the port this service uses). This also gives me a lot of flexibility as a service provider. I can choose whatever ports I see fit to provide the services; in other words, I don't need to follow conventions or try to let every potential client know about these technicalities. I can implement multiple variations of the same service using different ports. I can change my network topology on the go without affecting incoming client requests in any way, and so on.
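A sketch of that gateway idea, assuming an invented mapping table and a placeholder forward() helper: the only thing the edge box needs is the service signature; the internal host and port (the #video456 / 43678 example above) remain the provider's private business.

```python
# Edge gateway sketch: read a service signature, relay to whatever internal
# host/port currently implements that service. The mapping is invented.
BACKENDS = {
    "#video456": ("10.20.0.7", 43678),   # can change at will, clients never notice
    "#backup99": ("10.20.0.12", 9201),
}

def forward(payload: bytes, host: str, port: int) -> None:
    # placeholder for the actual relaying of traffic to the chosen backend
    print(f"relaying {len(payload)} bytes to {host}:{port}")

def handle_incoming(service_signature: str, payload: bytes) -> None:
    try:
        host, port = BACKENDS[service_signature]
    except KeyError:
        raise LookupError(f"no backend registered for {service_signature}")
    forward(payload, host, port)

handle_incoming("#video456", b"...client request...")
```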


Ok, some general information about the X project's architecture to keep in mind. The project claims to revolutionize the sphere of IoT services through its innovative infrastructure and usage of the aforementioned technologies. The problem the project is aiming to solve is defined as the difficulty of implementing widespread IoT infrastructure within the existing framework of static Internet topology. The problems get especially obvious when we think about mobile devices that are constantly moved from point to point, and IoT devices that can be intermittently connected to different hosts in different locations. Ok, the point is, when I address a certain service - say, the one that gathers information about burned calories from, say, some gadgets runners attach to their buttocks - it really is a problem if all those requests eventually close in on one single static IP address identifying one specific burned-calories-counting-American-national server located somewhere in the center of the Nevada desert or something. Because the runners with gadgets on their buttocks can be evenly distributed across the world, it would make sense to direct those network streams accordingly - to the parts of the service infrastructure located in the respective regions. Even more so, the same services (identical technologically, using the same protocols, conventions, etc.) can be provided by totally different companies. Like, I don't really care which particular company provides this service in my region; I just address some more or less abstract #buttock-service and let the global Internet infrastructure sort things out. Similarly, when I get access to the Internet, it doesn't really concern me what telecom companies, backbones, network infrastructures, etc. are responsible for the Internet in my region. I only have to deal with the conventional concept of #TheInternet, which is the same wherever I'm located - US, Iceland, China, doesn't matter - in this case, #TheInternet is a concept abstracted from the particulars regarding regional operators, technical infrastructures, etc. The same principle can be applied to IoT services, as well as to a wide range of other Internet services. So far, speaking of Internet services, it's absolutely not so. Internet services, in most cases, rely on relatively static infrastructure, predominantly located in the region under whose jurisdiction the service operates. This creates situations where, for example, smart homes in China use command and control servers located in California, and other weird stuff. So, yes, this is roughly how the X project formulates the problems inhibiting the spread and adoption of IoT.

The list of technologies involved in the project is the following:

Blockchain-like fast DAG structures. The project calls the technology Compacted Directed Acyclic Graphs. Something similar to what has been implemented in IOTA.

The Truthful Continuous Double Auction (TCDA). The project announces this algorithm as a basis for its IoT service marketplace.

Service-centric Networking (SCN). This is probably the key innovative technological aspect the project has if we take into consideration its declared goals. Simply put, it's a way of addressing services by names instead of IP addresses. It's quite different from DNS because domain names are mostly attached to IP addresses that rarely change. So, once again, it's a static location-centered system. With SCN, the way I understand it, I never know in advance where my request is going to be directed; it's a dynamic system. Also, I don't care, as long as I can access the service I need.

The system consists of four layers. The uppermost layer is the Service layer, where services are registered, announced, discovered by end users, etc. Below lies the Routing layer. Here, as I understand it, the project applies its SCN routing paradigm. As a user, I request a certain service I learned about on the Service layer, and the Routing layer carries me deep into the depths of the platform's networks and sub-networks - literally, I have no idea where I'm going to end up eventually - it's SCN. Below the Routing layer lies the Service-chain layer. The way I understand it, this is where all the settlements - setting prices, ordering, and paying - take place. Well, because this is where the IoT service marketplace is located. Also, this is where the X project applies TCDA to achieve maximum fairness regarding the participants' commercial activities: buying, selling, etc. Also, there are some Smart Contracts there too. Plus the State Machine; in other words, the usual stuff. The bottom level is called the Transaction-chain level, and this is where presumably all the transactions conducted on the platform get recorded into blockchain-like structures. To be accurate, it's not a classical blockchain, but rather a data structure similar to that of IOTA. It's based on the Directed Acyclic Graph (DAG) structure (although, to be more precise, in the project's documentation it's called a Compacted Directed Acyclic Graph - how exactly it's compacted is a big and interesting question), which has quite an exhaustive explanation in the IOTA documentation. What I understood so far is that they screwed up the consensus algorithm, got rid of the blocks, plus, currently, as a temporary measure, they have some sort of central server that gives additional transaction confirmations, which essentially makes IOTA not a decentralized system, and other weird stuff.


Ok, it's time to refresh all that stuff in my memory. Acyclic graphs as a foundation for quicker and more scalable blockchains, Internet/IoT-related services organizing their communication through abstract service names instead of concrete IP addresses, the principle of the continuous double auction as a driving mechanism of the respective marketplaces - all that stuff powering the magnificent infrastructure of the venerable X project, which, by the way, consists of four layers, where the top layer holds the IoT services themselves - their announcements to the world and stuff. On the layer below there is a routing system where - as one might expect from the name - the platform organizes communication between services and clients, using the aforementioned principle of locating the respective services by their names and the respective DHTs. On an even deeper level, there are Smart Contracts powering interactions between service providers and their buyers through the mighty and time-honored mechanism of the continuous double auction, to which the X project adds an additional twist, which is why it's called a truthful (fair, honest, whatever) continuous double auction there. In other words, it's time to brush the dust off all the knowledge and insights about all the disconnected and deeply confusing things I learned about a couple of weeks ago. Then it'll be time to strike. Well, so far, I have successfully recalled the key components comprising that system, as well as the fact that it is dedicated to creating a well-organized infrastructure for IoT services - a way for IoT service providers and IoT consumers to find each other and engage in mutually beneficial monetary transactions. The project's pitch and its outline of the problem area are both based on the fact that the IoT sphere currently suffers from the insufficient flexibility of the current Internet infrastructure, founded at the time when people thought that there were going to be a handful of big servers providing services and content to everybody else, who would play the essentially passive role of consumers. In other words, it's a star-like Internet topology. The problem is that it's fixed in this state, because all Internet communication fundamentally relies on IP addresses and DNS tables - both mechanisms are more or less fixed and inflexible. For example, what if my service is evenly distributed among multiple computers all across the world? I mean, like torrents. And here the project introduces a paradigm alternative to the standard client-server Internet infrastructure; namely, the paradigm is more similar to that of torrent exchange. A quick reminder: a torrent network is a constellation of random computers - that can be located anywhere - united by the fact that, say, all of them together store and distribute one particular file. Different computers on the network can contain different pieces of that file, and the information about which computer stores what is similarly distributed across the torrent network. More accurately, it's called a DHT (distributed hash table), because there's a system to how network participants find the various pieces of a file scattered throughout the network. It's like a tree of hashes or something. The key thing is that when I join the torrent network I don't know in advance with whom I'll be communicating, nor do I know which pieces of the information I need are stored where.
It's like a process of gradual discovery: I access one computer that has some up-to-date tips regarding whom I could ask next; they, in turn, similarly have information about other participants, and so on. The thing is that, despite the fact that such a system looks utterly chaotic, it presents a certain logically arranged structure called a DHT, in other words, a distributed hash table. So, Ok, here's this concept of service-centric networking - initially developed by the Serval project - that introduces a similar paradigm with regard to how computers on the Internet find each other and communicate. Currently, for example, if I'm interested in watching a movie, I recall the name of some streaming service, like Netflix dot com (that is, the domain name), type it into my browser's address bar, and then the following happens: the browser makes a specifically formed request to a chain of DNS servers, which is passed from one server to another until one of them returns an IP address (or a list of IP addresses) currently associated with this domain name. Ok, here's a problem. Domain name resolution services don't take into account that Netflix can actually have servers all over the world, and some of them can be geographically close to me, so it would make sense to send me the address of the particular server located close to me. But the domain name resolution system is incredibly primitive and straightforward. Basically, name servers store tables containing information about which domain names refer to which IP addresses. When I send a request to resolve a certain domain name to the DNS server, it simply takes that table of its, looks up what IP address is written next to the requested domain name, and sends this information to me. The name server is concerned neither with my location nor with the possible infrastructural complexities and quirks of the service in question. So, it leads to a situation where, however complex and decentralized the Internet content provider's infrastructure might be, all its users from all over the world are initially directed to one specific entry point. Which makes me think that this point is likely to become the system's bottleneck. Plus, what if we also take into consideration some potential major communication problems, leading to outages of whole segments of the Internet, and so on? Apparently, major online services can survive this by quickly changing the IP addresses associated with their domain to ones located in the unaffected area. Nevertheless, despite all the service's mighty and widespread infrastructure, such an event can still lead to its temporary blackout - during the period when all the DNS servers update their respective tables and stuff - which shows how fragile and inefficient this architecture actually is. Well, the alternative is a system similar to BitTorrent, with DHTs and stuff - which is what the Serval project proposed. The idea is that I address a service by name, which for everybody - including me and the routing services that would potentially be implemented to serve such requests - is highly abstract and in no way implies the service's actual location. Gosh, I have a butterfly flying in my room.
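Anyway, here's a minimal Python sketch of that primitive resolution-plus-connect step as a client performs it today: ask the resolver for a flat list of addresses and connect to the first one that answers, with zero awareness of distance, load, or what kind of service sits behind the name. The hostname is just a placeholder.

```python
# The "dumb" resolution step: the resolver hands back candidate addresses,
# and the client simply tries them in order until something answers.
import socket

def naive_connect(hostname: str, port: int) -> socket.socket:
    # getaddrinfo is the standard resolver call; the returned list carries no
    # notion of my location, the provider's load, or the type of service.
    candidates = socket.getaddrinfo(hostname, port, type=socket.SOCK_STREAM)
    for family, sock_type, proto, _canonname, sockaddr in candidates:
        try:
            sock = socket.socket(family, sock_type, proto)
            sock.settimeout(5)
            sock.connect(sockaddr)   # first address that answers wins,
            return sock              # however far away it happens to be
        except OSError:
            continue
    raise ConnectionError(f"could not reach any address for {hostname}")

# naive_connect("example.com", 443)
```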


What do I remember about the Serval project? The key thing about its service-centric routing architecture is that the project introduces it at the bottom of the OSI model - at its transport/network level. Service signatures are embedded into reserved bytes of TCP packets, so technically it allows implementing the following scheme. For example, I conduct all my exchange with a special server that constantly tracks the network topology and knows all the entry points of the Internet services represented by specific service signatures. So I use this server as a kind of intermediary - like a VPN server or router or something - and it, in turn, directs all my traffic to a specific point within the infrastructure of the service I use. This entry point can be chosen based on the server's load-balancing strategy, my location, and other factors, say, the current situation with major Internet backbones, and so on. On the one hand, I address the aforementioned server as I would address a DNS server. Like, the initial information I have is the name of the service, say, #ultra-video-streaming-45646. Apart from that, I know nothing - like, its IP address and stuff - so I address this service-name resolving server as an entity that may have some idea where the actual service's hosts might be located. The difference is that instead of simply returning the IP address of the requested service - essentially what an interaction with DNS servers boils down to - it reroutes my TCP connection carrying the service's signature to the respective service's entry point. Ideally, it also takes into account my location, so, if the service has a distributed infrastructure, I'm connected to one of its points located geographically close to me. Drawing an analogy with BitTorrent, the service-name resolution server can be part of a large distributed network. For example, a torrent exchange starts with addressing a tracker node - an IP address recorded in the torrent file. A tracker node is a host on the torrent network containing both chunks of the file in question and up-to-date information about which other hosts on the network have chunks of that file. As a matter of fact, the torrent file contains a whole list of trackers. (Not to be confused with what's often called a "torrent tracker" - a server cataloging various resources available on the torrent network and the respective torrent files. Such a server can easily be shut down, while the torrent network itself cannot.) The idea is that if, for example, one specific tracker leaves the torrent network (like, it's turned off or it's under DDoS attack - it can be anything), it won't paralyze the process of distribution of a certain file. There can actually be different torrent files featuring different lists of trackers, although they would all point to essentially the same file, identified by its hash signature, pieces of which are scattered all over the torrent network.

Similarly, the network of nodes supporting service-name resolution can be large, distributed, and fluid. I don't know, for example, there may be some key servers providing relevant and up-to-date information about all the active nodes doing service-name resolution. Once again, it can be some server geographically located close to me. That might be important, considering that it's always going to serve as an intermediate point in my communication with a specific service. By the way, why is it important that service-name resolution is implemented on the transport/network level, and not, for example, on the application or session level? For example, currently a service provider can redirect users' requests based on the TCP port number - essentially a small part of the transport-level protocol referring to what's going on at the application level. Like, if I see TCP traffic addressed to port 80, I realize that it's - most likely - an HTTP exchange between a user's web browser and an HTTP server program (Apache or whatever). Ok, I'll need to take a look at the Serval docs once again.

Ok, a quick refresher on another of the X project's features - the Directed Acyclic Graph based blockchain. I've already written about Directed Acyclic Graphs (DAGs) as a useful mathematical concept for modeling hierarchies and dependencies. The X project's docs say that they use a CoDAG modification of DAGs, or Compacted Directed Acyclic Graphs. How they are actually compacted is still a big and mysterious question to me. Meanwhile, let's brush up on what we know about DAGs' application to building blockchains... eh... decentralized ledgers. Well, technically, it's a different type of structure, and some things that we know fundamentally determine how the classical blockchain works wouldn't work for acyclic graphs. For example, one of the important factors guaranteeing the immutability of information in a certain block on the blockchain is the number of additional blocks added on top of that block. Technically - considering the last added block - it's possible to change the block's content and recalculate its hash. There's, of course, that computational difficulty of calculating blockchain hashes and all that, but theoretically it's possible. It's not accidental that any transaction on the blockchain is considered valid only after several other blocks have been added on top of the block containing that transaction. In this case, altering that transaction would require recalculating both the hash of the block containing it and the hashes of all the blocks added later. For all intents and purposes, it's virtually impossible in a situation where the majority of miners (who, by the way, taken together, have much more computational power than any potential malicious actor) relentlessly hash and add new blocks. On the other hand, if the ledger is organized in the form of a directed graph - a tree, if you wish - it means that technically blocks can be added not only on top of the last block but on top of any previously added block; in other words, any block can "spawn" multiple branches. (Technically, adding something "on top of a block" in this case means that the hash of that block is included in the content of the block that's added "on top." In other words, the block added "on top" of another block is cryptographically attached to the latter. Therefore, for example, if I alter the content of some block, it will invalidate both its hash and all the blocks added on top of it.) So the thing is, if we have multiple branches, we cannot guarantee that all the blocks will have other blocks added on top of them. It means that in such a structure there will be numerous blocks that can potentially be compromised.
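Here's a minimal sketch of that "cryptographically attached" point: each block stores its parent's hash, so editing an old block breaks verification of everything built on top of it - which is exactly the protection a rarely-extended leaf in a branching DAG wouldn't get.

```python
# Toy hash-chained ledger: tampering with an old block is immediately visible
# because every later block was computed over its parent's hash.
import hashlib

def block_hash(parent_hash: str, content: str) -> str:
    return hashlib.sha256((parent_hash + content).encode()).hexdigest()

# build a small chain with invented contents
chain = [{"parent": "0" * 64, "content": "genesis"}]
chain[0]["hash"] = block_hash(chain[0]["parent"], chain[0]["content"])
for content in ["tx batch 1", "tx batch 2", "tx batch 3"]:
    parent = chain[-1]["hash"]
    chain.append({"parent": parent, "content": content,
                  "hash": block_hash(parent, content)})

def verify(chain) -> bool:
    for prev, block in zip(chain, chain[1:]):
        if block["parent"] != prev["hash"]:
            return False
        if block["hash"] != block_hash(block["parent"], block["content"]):
            return False
    return True

print(verify(chain))          # True
chain[1]["content"] = "tx batch 1 (tampered)"
print(verify(chain))          # False: every later block would need rehashing
```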
