> One solution is to treat startup cycles as a resource similar to e.g. size or backend servers.
That's the only way to actually hit performance targets in a large org, IMO.
Google Search is still fast because if you degrade p99 latency an SRE will roll back your change. MacBooks still have good battery life because Apple have an army of QA engineers, and if they see a bump on their ammeters that macOS release doesn't go ahead.
Everything else (especially the "engineers these days just don't know how to write efficient code" talk) is noise. In big tech projects you get the requirements your org encodes in its metrics and processes; you don't get the others. It's as simple as that.
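To make "encoded in metrics and processes" concrete, here's a minimal sketch of what such a gate could look like: a CI check that treats startup cycles as a budgeted resource and fails the build when the budget is blown. Everything here (the budget file, the measurement harness, the 2% tolerance) is a hypothetical stand-in, not any real Microsoft or Google tooling:

```python
# Minimal sketch: treat startup cycles as a budgeted resource in CI.
# The budget file, the measurement script, and the tolerance are all
# hypothetical placeholders for whatever an org's harness actually uses.
import json
import subprocess
import sys

BUDGET_FILE = "perf_budgets.json"  # e.g. {"app_startup_cycles": 1.2e9}
TOLERANCE = 0.02                   # allow 2% measurement noise before failing

def measure_startup_cycles() -> float:
    # Stand-in for the real harness; we pretend this script prints the
    # cycle count (say, parsed from `perf stat`) as a plain number.
    out = subprocess.run(
        ["./measure_startup.sh"], capture_output=True, text=True, check=True
    )
    return float(out.stdout.strip())

def main() -> int:
    budget = json.load(open(BUDGET_FILE))["app_startup_cycles"]
    measured = measure_startup_cycles()
    if measured > budget * (1 + TOLERANCE):
        print(f"FAIL: startup used {measured:.3g} cycles, "
              f"budget is {budget:.3g} (+{TOLERANCE:.0%} tolerance)")
        return 1  # non-zero exit blocks the merge
    print(f"OK: {measured:.3g} / {budget:.3g} cycles")
    return 0

if __name__ == "__main__":
    sys.exit(main())
```

The specific numbers don't matter; what matters is that the check runs on every change and failing it has consequences, so the requirement lives somewhere other than a slide deck.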
Never worked at MS but it's obvious to me that the reason Windows is shit is that the things that would make it good simply aren't objectives for MS.
As an ex-Microsoft SDET who worked on Windows: we used to test for those things as well. In 2014.
Then Microsoft made the brave decision that testers were simply unnecessary. So they laid off all SDETs, then decided that SDEs should be in charge of the tests themselves.
Which effectively meant there was no test coverage of Windows at all, as the majority of SDEs had not interacted with the test system prior to that point. Many/most of them did not know how to run even a single test, let alone interpret its results.
This is what Microsoft management wanted, so this is what they got. I would not expect improvement, only slow degradation as Windows becomes Bing Desktop, featuring Office and Copilot (Powered By Azure™).
Makes perfect sense. To me personally, this is the meaning of POSIWID [1] ("the purpose of a system is what it does"), even though it recently became clear to me (e.g. [2]) that it's not a cohesive concept.
Basically, making Windows a good desktop OS is not in any meaningful way the "purpose" of that part of MS. The "purpose" of any team of 20+ SWEs _is_ the set of objectives it measures and acts upon. That's the only way you can actually predict the outcomes of its work.
And the corollary is that you can usually look at the output of such an org and quite clearly infer what its "purpose" is, when defined that way.