
Well said.

An example I encountered was someone taking the "KISS" approach to enterprise reporting and ETL requirements: no layer between their internal data model and the data handed to customers, and no separate replica of the server or database to serve those requests, because both were seen as too complex.

This failed in more ways than I can count. The system instantly became deeply ingrained in all customer workflows, but the connections came from hundreds of non-technical users via PowerBI, each with their own bespoke reports. Whenever an internal column name or the structure of the data model changed so the devs could evolve the platform, those users just got a generic "Query Failed" error and lit up the support team. Technical explanations about needing to modify their queries were completely lost on the end users; they just wanted the dev team to fix it.

Pagination, request complexity limits, indexes, request rate limiting, and so on were never considered either, because they weren't deemed simple. But none of those can be added later without breaking changes, because a non-technical user has no idea what to do when their Excel report gets rate limited on 29 of the 70 queries it fires per second. And nobody worried about OLAP workloads overloading the prod OLTP databases and taking them down.
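
To make the missing layer concrete, the decoupling that wasn't there looks roughly like this (a sketch with made-up table and column names, using sqlite3 only to keep it self-contained): reports query a stable view, never the internal tables, so an internal rename means updating one view definition instead of hundreds of PowerBI reports.

  # Rough sketch (invented names) of a layer between the internal data model
  # and what customers query: a stable view with a fixed column contract.
  import sqlite3

  conn = sqlite3.connect(":memory:")
  conn.execute("CREATE TABLE orders_v2 (ord_id INTEGER, cust_ref TEXT, total_cents INTEGER)")
  conn.execute("INSERT INTO orders_v2 VALUES (1, 'ACME', 1999)")

  # Customer reports only ever see report_orders. When devs rename ord_id or
  # split orders_v2 apart, only this SELECT changes; the reporting contract stays put.
  conn.execute("""
      CREATE VIEW report_orders AS
      SELECT ord_id AS order_id,
             cust_ref AS customer,
             total_cents / 100.0 AS total_amount
      FROM orders_v2
  """)

  print(conn.execute("SELECT order_id, customer, total_amount FROM report_orders").fetchall())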

All in all, that system was "simple", took about 2 weeks to build, and was rapidly adopted into critical processes, and then the team responsible for it left. It took the remaining team members a bit over 2 years to fix it by redesigning it and hand-holding non-technical users all the way down to fixing their own Excel sheets. It was a total nightmare caused by wanting to keep things simple, when what it really needed was: heavy abstraction layers, database replicas, infrastructure scaling, caching, rewriting a lot of application logic to make the data presentable where needed, index tuning, automated generation of large datasets for testing, automated load testing, release process management, versioning strategies, documentation and communication processes, and deprecation policies. They thought they could avoid months of work by keeping it simple, and instead caused years of mess, because making breaking changes is extremely difficult once you have wide adoption.
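
For a sense of what even the "boring" items on that list mean in practice, here is a hedged sketch of pagination plus per-client rate limiting sitting in front of the data instead of letting report tools hit the database raw; all names and limits are invented for illustration:

  # Illustrative only: cap page sizes and per-client request rates so a single
  # Excel refresh can't hammer the backing store. Limits and names are made up.
  import time
  from collections import defaultdict

  MAX_PAGE_SIZE = 1000
  REQUESTS_PER_MINUTE = 60
  _recent_requests = defaultdict(list)  # client_id -> timestamps of recent calls

  def fetch_report_page(client_id, rows, page=1, page_size=100):
      # Sliding-window rate limit: refuse the call once the per-minute budget is spent.
      now = time.monotonic()
      window = [t for t in _recent_requests[client_id] if now - t < 60]
      if len(window) >= REQUESTS_PER_MINUTE:
          raise RuntimeError("rate limited: retry later")
      window.append(now)
      _recent_requests[client_id] = window

      # Pagination: never hand back more than MAX_PAGE_SIZE rows per request.
      page_size = min(page_size, MAX_PAGE_SIZE)
      start = (page - 1) * page_size
      return rows[start:start + page_size]

  # In the redesign, `rows` would come from a read replica, never the prod OLTP database.
  print(fetch_report_page("powerbi-user-17", list(range(10_000)), page=2, page_size=5))

The point isn't the specific mechanism; it's that none of these guardrails can be retrofitted without breaking existing reports once everyone depends on the unguarded access path.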



While I tend to agree with your position, it sounds like they built a system in less than 2 weeks that was immediately useful to the organization. That seems like a win to me, and makes me wonder whether, in hindsight, there were other ways such a system could have evolved.

> They thought they could avoid months of work by keeping it simple, and instead caused years of mess, because making breaking changes is extremely difficult once you have wide adoption.

Right. Do you think a middle ground was possible? Say, a system that took 1 month to build instead of two weeks, but with a few more abstractions to help with breaking changes in the future.

Thanks for sharing your experience btw, always good to read about real world cases like this from other people.


> While I tend to agree with your position, it sounds like they built a system in less than 2 weeks that was immediately useful to the organization. That seems like a win to me, and makes me wonder whether, in hindsight, there were other ways such a system could have evolved.

I don't think that's the right read. Quick time to market doesn't mean the half-baked MVP has to be the end state.

A sounder approach would have been to log the missing abstraction layer as technical debt to be paid down right after launch. You deliver something that works in 2 weeks and then execute the rest of the design as follow-up work. That is exactly what technical debt represents, and why the "debt" analogy fits so well. Quick time to market doesn't force anyone to put together half-assed designs.



