Their solution is too hot, but yours may be too cold; somewhere in between is an approach that's just right. While I too am wary of people reaching for the latest craze on every simple problem, I've become equally wary of the "tried and true" orthodoxy. The Python/Flask/Postgres stack is great for rapidly prototyping a functioning application, and it's almost always fine for a typical web app, but sometimes it can't evolve or scale to meet irregular needs. It struggles with more complicated data processing, especially where uneven resource utilization and complex workflow orchestration are concerned, and Celery workers only address those problems to a degree. Home-grown ETL is easy at version 1 and usually a nightmare by version 2. It's a hard problem with a lot of wheel reinventing, so it's good that common platforms are emerging (Airflow in particular).
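To make the "common platform" point concrete, here's a minimal sketch of what Airflow buys you over a home-grown ETL script. This assumes Airflow 2.x; the DAG name, schedule, and task bodies are placeholders I made up for illustration, not anything specific from this discussion.

```python
# Minimal Airflow DAG sketch: three placeholder ETL steps wired into a
# scheduled, dependency-aware pipeline. Retries, backfills, and the UI
# come with the platform, which is exactly the wheel a version-2
# home-grown ETL script ends up reinventing.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract():
    # pull rows out of the app database (placeholder)
    ...


def transform():
    # reshape/aggregate the extracted data (placeholder)
    ...


def load():
    # write results to the reporting store (placeholder)
    ...


with DAG(
    dag_id="nightly_etl",          # hypothetical pipeline name
    start_date=datetime(2023, 1, 1),
    schedule_interval="@daily",    # hypothetical schedule
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)
    load_task = PythonOperator(task_id="load", python_callable=load)

    # declare the dependency chain; Airflow handles ordering and failures
    extract_task >> transform_task >> load_task
```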
A full-on Hadoop stack is rarely warranted, but I can understand the reasoning behind wanting processing capacity flexible enough to accommodate any anticipated load, regardless of how often it actually occurs.
Yes, but if you try to design for what you anticipate the bottlenecks will be as you scale, you are almost guaranteed to discover the real bottlenecks are in an entirely different part of the architecture than you expected.
So there is still a good argument to be made for building the Minimum Viable Product in whatever technologies are most productive for your developers, and figuring out how to scale as you grow.