
Just out of curiosity, how would you design a distributed website like this? How could this be done more simply?


Well, since this looks like a blog: either use a blogging platform, or a static site generator hosted on a static website platform(1) or on something like Lambda + a CDN.

(1): https://azure.microsoft.com/en-us/services/app-service/stati...


The article explicitly mentions needing to support writes with a globally distributed user base. This is not a static website.


Not everything in a "static" webapp needs to be static: push most static assets to the CDN and render the few dynamic bits, like "user details", on the client. It works out so much simpler.
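
For example, here's a minimal sketch of what that split can look like, assuming Django (the route and field names are just illustrative). The page itself is plain HTML served from the CDN, and the browser calls a tiny JSON endpoint like this to fill in the personalized bits:

    # views.py / urls.py sketch (Django; names are illustrative)
    from django.http import JsonResponse
    from django.urls import path
    from django.views.decorators.cache import never_cache

    @never_cache  # personalized, so keep it out of the CDN cache
    def user_details(request):
        if not request.user.is_authenticated:
            return JsonResponse({"authenticated": False})
        return JsonResponse({
            "authenticated": True,
            "username": request.user.get_username(),
        })

    urlpatterns = [
        path("api/me/", user_details),
    ]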


and the login for your users?


By not making it distributed.


What is missing from the description, apparently, is the SLA for the site's downtime.

If it's really low, I can imagine why it was made distributed. Otherwise, I'd see that as a demo and an exercise.


> What is missing from the description, apparently, is the SLA for the site's downtime.

> If it's really low, I can imagine why it was made distributed.

If extremely low downtime is the goal, I would not be home-rolling a distributed website technology stack. The more moving pieces you have, the more likely one of them is to fail.

The recipe for a high availability, high reliability website is relatively simple in the age of Cloudflare and other cheap hosted services. Introducing a lot of complexity and home-grown solutions is the last thing you want for high availability unless you have scores of engineers to maintain it and you can't solve it through traditional means.


While I agree in principle, a database-driven distributed site would need a bit more than Cloudflare.

(For a personal informational site I'd take a static site generator.)


It is the author's own website, there is no contractual SLA.

In any case, with a distributed architecture there are many parts that can break. Fewer moving parts means fewer broken pieces.


That “distributed” quality itself is a choice that introduces complexity. There are many sites that deliver the same or greater level of value to users, and use a “boring” architecture and deployment.


Do you mind sharing some? Most websites I can think of are either highly distributed (e.g. Facebook, Netflix) or their customer base is geographically limited (Yelp).

Uploading media files to (or from) developing countries with weaker internet infrastructure often results in timeouts and dropped connections. I tried uploading an 8GB file from Florida to S3 in Singapore, and my connection often timed out.

I'm trying to imagine how you can deliver a fast website to users around the globe without distributed systems.


Those are all sites with thousands of engineers, real-world use, and actual evolutionary pressure driving their features. The right comparison here is to essentially static sites with some light user statefulness.


This guy's site is not Netflix or Facebook, nor does he have 10k developers to support those architectures.

If he were pragmatic, a Ruby on Rails/Django app on Heroku would do wonders. If he wants to promote React because that's what he sells, that's a different story.

The problem is people taking what this guy says as "the modern way to do it"; that's how you end up with the messes you find at work.


Sorry for the off-topic comment, but could you please email [email protected]? I want to send you a repost invite.


My email is the same as my username at the usual proton mail domain.

Sorry for my dumbness, but I'm not sure what a repost invite is. Something good, I hope XD.

Thanks.


Thanks, I'll send it!

For anyone who's wondering: a repost invite is a way of getting a post into the second-chance pool (https://news.ycombinator.com/pool, explained at https://news.ycombinator.com/item?id=26998308), so it will get a random placement on HN's front page. If the original submission is older than a few days, we don't re-up it, but rather invite the submitter to repost it. Then it goes automatically into the pool. So yes, it's something good :)


Done. Thanks!


At least half of the gains could probably be achieved by just rolling out a distributed Redis setup and not distributing the PostgreSQL database at all. That would cover most non-personalized read operations, which on this type of site should be most of them.

The database distribution also looks like an attempt to solve a problem of his own making: he states that each blog post's page view needs 30 database queries in the background, which to me suggests an inefficient data structure or complexity for complexity's sake. Heck, he could probably achieve 80% of what he's trying to do by serving selected slow-but-static queries from edge-computing KV stores, without any further distributed backend or database servers.

But of course this whole project is a sales pitch, and he wouldn't be getting the attention he's receiving with a simple Django+Postgres+Redis setup in a single region and basic CI/CD.
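
As a rough sketch of the Redis-caching part (assuming Django with a Redis cache backend configured; the Post model, key format and TTL are made up):

    # Cache non-personalized reads in Redis via Django's cache framework.
    # Assumes CACHES in settings.py points at the (replicated) Redis instances.
    from django.core.cache import cache
    from django.shortcuts import get_object_or_404, render

    from blog.models import Post  # hypothetical model

    CACHE_TTL = 300  # seconds

    def post_detail(request, slug):
        key = f"post:{slug}"
        data = cache.get(key)
        if data is None:
            post = get_object_or_404(Post, slug=slug)
            # Collapse the "30 queries per page view" into one cached payload.
            data = {
                "title": post.title,
                "body": post.body,
                "published": post.published_at.isoformat(),
            }
            cache.set(key, data, CACHE_TTL)
        return render(request, "blog/post_detail.html", {"post": data})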


It doesn't need to be distributed. It's a mostly static site with some dynamic parts.

A single small server running a typical web framework (anything from Django to ASP.NET to Laravel) can easily serve all these pages in milliseconds. Add a CDN in front to cache them and serve them quickly to users around the world.
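
As a rough sketch (assuming Django; the view, template and max-age values are illustrative), the origin only has to mark its mostly-static pages as publicly cacheable and the CDN does the rest:

    from django.shortcuts import render
    from django.views.decorators.cache import cache_control

    @cache_control(public=True, max_age=60, s_maxage=3600)  # CDN may keep it for an hour
    def article(request, slug):
        # Rendered once on the single origin server; the CDN serves repeat
        # requests from edge locations close to the user.
        return render(request, "article.html", {"slug": slug})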


Ruby on Rails



