Hacker News

I'm sympathetic to your situation, but it's possible that the senior was still right to remove it at the time, even if you were right that the product would eventually need it.

If I recall correctly, SpaceX has a measure that captures this idea: the ratio of removed features that are later added back to total removed features. If every removed feature comes back, that's 100% 'feature recidivism' (if you grant some wordsmithing liberty), and obviously you're cutting features too often; 70% is too much, even 30%. But critically, 0% feature recidivism is bad too, because it means you're not trying hard enough to remove unneeded features and you'll accumulate bloat as a result. I'd guess you want this ratio to run higher early in a product's lifecycle and eventually asymptote down to a low non-zero percentage as the product matures.
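The ratio described above is simple to state concretely. A minimal sketch, assuming a hypothetical event log where each removed feature is flagged if it was later re-added (the names and structure here are illustrative, not any real SpaceX or product-team tooling):

```python
from dataclasses import dataclass


@dataclass
class RemovedFeature:
    name: str          # hypothetical feature identifier
    added_back: bool   # was it later re-added?


def recidivism_ratio(removed: list[RemovedFeature]) -> float:
    """Fraction of removed features that were later added back."""
    if not removed:
        return 0.0
    return sum(f.added_back for f in removed) / len(removed)


# Illustrative log: 2 of 4 removed features came back -> 50% recidivism,
# well above the ~15% target discussed downthread.
log = [
    RemovedFeature("telemetry-v1", added_back=True),
    RemovedFeature("legacy-export", added_back=False),
    RemovedFeature("retry-queue", added_back=False),
    RemovedFeature("audit-trail", added_back=True),
]

print(f"{recidivism_ratio(log):.0%}")  # prints "50%"
```

The hard part in practice is not the arithmetic but deciding what counts as a "re-add" rather than a new feature, which is exactly what the rest of this thread argues about.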

From this perspective, the exact set of features required to make the best product is unknown in the present, so it's fine to take a stochastic approach to removing features to make sure you cut unneeded ones. If you need to add one back, that's fine, but it shouldn't cast doubt on the decision to remove it in the first place unless it's happening too often.

Alternatively you could spend 6 months in meetings agonizing over hypotheticals and endlessly quibbling over proxies for unarticulated priors instead of just trying both in the real world and seeing what happens...




At one time Musk stated that SpaceX data suggested that needing to add back 15% of what was removed was a useful target. He suggested that some of the biggest problems came from failing to keep requirements simple enough, because smart people kept adding requirements and they offered the most credible and seemingly well-reasoned bad ideas.


Thanks, I couldn't remember the exact number. 15% seems like a reasonable R&D overhead for reducing inefficiencies in the product, but I suspect the optimal number changes depending on the product's lifecycle stage.


He also made it mandatory that all requirements had to be tied to a name. So there was no question as to why something was there.


How would you measure / calculate something like that? Adding some amount back, but not too much, seems like the right outcome, but putting a precise number on it is just arrogance.


Either accepting or dismissing the number without understanding its purpose or source can also be arrogance, but I agree that throwing a number out without any additional data is of limited, but not zero, usefulness.

When I want to know more about a number, I sometimes test the assumption that an order of magnitude more or less (1.5%, 150%) is well outside the bounds of usefulness, to get a sense of the range the number exists within.


No, dismissing something that is claimed without evidence is not arrogance, but common sense.


I think we're getting hung up on the concept of dismissing. To question skeptically, to ask if there is evidence or useful context, to seek to learn more is different from dismissing.

The 5 Step Design Process emphasizes making requirements "less dumb," deleting unnecessary parts or processes, simplifying and optimizing design, accelerating cycle time, and automating only when necessary.

Musk suggests that if you're not adding requirements back at least 10%-15% of the time, you're not deleting enough initially. The percentage was initially an estimate based on experience, and has since been refined over several years of manufacturing practice.


> How would you measure / calculate something like that?

SpaceX probably has stricter processes than your average IT shop, so it isn't hard to calculate stats like that. Once you have the number, you tune it until you're happy, and that number becomes your target. They arrived at 15%. This is no different from test coverage targets: it's just a useful tool, not arrogance.
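The test-coverage analogy suggests a gate-style check. A minimal sketch, assuming the 10%-15% band discussed upthread (the band and function names are illustrative, not a real SpaceX or CI configuration):

```python
# Hypothetical target band for the re-add ratio, analogous to a
# test-coverage threshold in a CI gate. Values from the thread's
# discussion, not from any real process.
TARGET_LOW, TARGET_HIGH = 0.10, 0.15


def judge(removed_count: int, readded_count: int) -> str:
    """Compare the measured re-add ratio against the target band."""
    ratio = readded_count / removed_count if removed_count else 0.0
    if ratio < TARGET_LOW:
        return "under target: probably not deleting aggressively enough"
    if ratio > TARGET_HIGH:
        return "over target: deleting things that were actually needed"
    return "within target band"


print(judge(removed_count=40, readded_count=5))  # 12.5% -> within target band
```

As with coverage thresholds, the gate is only as good as the counting discipline behind it.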


I have no clue what you are saying. "They did it somehow"? How? Maybe they did not measure it, but Elon just imagined it. How can we tell the difference?


Yeah, not sure how you'd measure this apart from asking people to tag feature re-adds. And all that will happen is that people will decide that something is actually a new feature rather than a re-add because the threshold has already been hit this quarter. Literally adding work for no benefit.


Maybe it's just down to the way the comment was written and it actually played out differently, but the one thing I'd be a bit miffed about is someone more senior just coming in and nuking everything because YAGNI, like the senior who approves a more junior engineer's code and then spends their time rewriting it all after the fact.

Taking that situation as read, the bare minimum I'd like from someone in a senior position is to:

a) invite the original committer to roll it back, providing the context (there isn't a requirement, ain't gonna need it, nobody asked for it, whatever). Even then this might still create some tension, but nowhere near as much as having someone higher up the food chain take a fairly simple task into their own hands.

b) question why the team isn't on the same page on the requirements such that this code got merged and presumably deployed.

You don't have to be a micromanager to keep a finger on the pulse of your team and the surrounding organisational context. And since a senior should be someone who passes knowledge down to the more junior people on the team, there are easy teaching moments here.


There are a lot of cultural factors that could change my perspective here, but this is a reasonable criticism of the senior guy's approach.




