I'm not talking about the consensus protocol of the blockchain itself, but about the p2p algorithms underlying it, e.g. using Kademlia for service discovery and message routing. I'm asking why a distributed system would choose something like Consul (which uses Raft, and requires a coordinator node) instead of running a decentralized protocol like Kademlia (which has no coordinator nodes) within its distributed single-tenant environment.
I did a bit more research last night, and discovered that Bitfinex actually does something like this internally (anyone know if this is up to date?) [0] — they built a service discovery mesh by storing arbitrary data on a DHT implementing BEP44 (using webtorrent/bittorrent-dht [1]).
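For flavor, here's a minimal sketch of publishing a service announcement as a BEP44 immutable item, loosely following the put/get callbacks of bittorrent-dht [1]. The service name, endpoint, and payload shape are illustrative assumptions, not grenache's actual wire format:

```typescript
// bittorrent-dht ships no bundled typings, so DHT is untyped here.
const DHT = require("bittorrent-dht");

const dht = new DHT();
dht.listen(20001); // arbitrary port

// BEP44 immutable items are keyed by the SHA-1 of the value and capped
// at 1000 bytes, so a small JSON announcement fits comfortably.
const announcement = Buffer.from(
  JSON.stringify({ service: "rates", endpoint: "10.0.0.12:7000" })
);

dht.put({ v: announcement }, (err: Error | null, hash: Buffer) => {
  if (err) throw err;
  // Any peer that knows (or can derive) the hash can resolve it.
  dht.get(hash, (err2: Error | null, res: { v: Buffer }) => {
    if (err2) throw err2;
    console.log(JSON.parse(res.v.toString()));
  });
});
```

Note that lookup-by-name (what a service mesh actually needs) would use BEP44 mutable items instead, where the key is an ed25519 public key and records are signed and versioned with a sequence number.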
This seems pretty cool to me, and IMO any modern distributed system should consider running decentralized protocols to benefit from their robustness properties. Deploying a node to a decentralized protocol requires no coordination or orchestration beyond the node simply joining the network. Scaling a service is then as simple as adding a node and announcing it as an implementation of that service.
At first glance, this looks like a competitive advantage, because it decouples the operational and maintenance costs of the network from its size.
So I'm wondering if there is a consistent tradeoff in exchange for this robustness. Are decentralized applications more complex to implement but simpler to operate? Is the latency of decentralized protocols (e.g. the average number of hops to look up an item in a DHT) untenably higher than that of distributed protocols (e.g. one hop to get instructions from the coordinator, then one hop to look up the item in a distributed KV)? Does a central coordinator eliminate some kind of principal-agent problem, resulting in e.g. more balanced usage of the hashing keyspace?
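On the latency question, a back-of-envelope comparison, assuming the textbook estimate that a Kademlia lookup takes on the order of log2(n / k) hops with bucket size k (a simplification of the real routing behavior):

```typescript
// Rough hop-count model: Kademlia halves the keyspace distance each hop,
// so a lookup in an n-node network takes roughly log2(n / k) hops, where
// k is the bucket size. The coordinator design pays a flat ~2 hops.
function kademliaHops(n: number, k = 20): number {
  return Math.max(1, Math.log2(n / k));
}

for (const n of [100, 1_000, 100_000, 10_000_000]) {
  console.log(`${n.toLocaleString()} nodes: ~${kademliaHops(n).toFixed(1)} DHT hops vs 2`);
}
// Even at 10M nodes this is ~19 hops: noticeable, but not obviously untenable.
```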
Decentralization emerged because distributed solutions fail in untrusted environments, but that doesn't mean decentralized solutions fail in trusted environments. So why not consider decentralized protocols more often when scaling internal systems?
Raft is a consensus protocol. You don't need it if you don't need consensus.
You can do service discovery etc. with gossip protocols. You are right that you don't need consensus to have systems publish their own keys on a network.
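As a sketch of that idea (hypothetical names, an in-process simulation rather than a real transport): each node holds a registry of service announcements and periodically merges state with a random peer, last-write-wins. A real system would also track multiple endpoints per service, liveness, and TTLs.

```typescript
type Announcement = { endpoint: string; timestamp: number };

class GossipNode {
  private registry = new Map<string, Announcement>();
  private peers: GossipNode[] = [];

  addPeer(peer: GossipNode): void {
    this.peers.push(peer);
  }

  // Announce this node as an implementation of a service.
  announce(service: string, endpoint: string): void {
    this.registry.set(service, { endpoint, timestamp: Date.now() });
  }

  lookup(service: string): string | undefined {
    return this.registry.get(service)?.endpoint;
  }

  // One gossip round: exchange registries with one random peer.
  gossipOnce(): void {
    if (this.peers.length === 0) return;
    const peer = this.peers[Math.floor(Math.random() * this.peers.length)];
    peer.merge(this.registry);
    this.merge(peer.registry);
  }

  // Last-write-wins merge: newer announcements overwrite older ones.
  private merge(remote: Map<string, Announcement>): void {
    for (const [service, ann] of remote) {
      const local = this.registry.get(service);
      if (!local || ann.timestamp > local.timestamp) {
        this.registry.set(service, ann);
      }
    }
  }
}

// Usage: one node announces, and the announcement spreads epidemically.
const nodes = [new GossipNode(), new GossipNode(), new GossipNode()];
for (const n of nodes) for (const p of nodes) if (p !== n) n.addPeer(p);
nodes[0].announce("billing", "10.0.0.5:8080");
for (let round = 0; round < 5; round++) nodes.forEach((n) => n.gossipOnce());
console.log(nodes[2].lookup("billing")); // "10.0.0.5:8080" with high probability
```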
Systems that use Raft (or an equivalent like Paxos) do have a real need for consensus. For example, Cassandra/CockroachDB: when you run `UPDATE account SET balance = balance - 10 WHERE user='chatmasta';`, you need that transaction to go through exactly once globally, and you need the whole system to agree on whether it did. Or Kubernetes: when you ask for a database to be served on some hostname, you need a single instance to go up, and every load balancer to route to that same instance.
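To make the first example concrete, here's a toy illustration of the failure mode consensus prevents; the partition scenario and the reconciliation step are deliberately naive and hypothetical, not any real database's behavior:

```typescript
type Ledger = Map<string, number>;

function debit(replica: Ledger, user: string, amount: number): void {
  replica.set(user, (replica.get(user) ?? 0) - amount);
}

const a: Ledger = new Map([["chatmasta", 100]]);
const b: Ledger = new Map([["chatmasta", 100]]);

debit(a, "chatmasta", 10); // original request; the ack is lost
debit(b, "chatmasta", 10); // client retry lands on the other replica

// Without agreement on a single ordered log, both replicas applied the
// same logical debit, and a delta-based reconciliation double-counts it.
const merged = 100 + (a.get("chatmasta")! - 100) + (b.get("chatmasta")! - 100);
console.log(merged); // 80, but the user only spent 10, so it should be 90
```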
If you have examples of systems that use strong consensus when it's not required, point them out. Asking "why do some systems use strong consensus when other systems (doing something completely different) get by without it" is a bit strange.
[0] https://github.com/bitfinexcom/grenache
[1] https://github.com/webtorrent/bittorrent-dht