I agree mostly, though I think the "break things" bit got twisted and
misunderstood.
We were supposed to break: limits, barriers, status quos, ossified ideas... Instead we broke: treasured social norms, privacy, mutual respect, and dignity. There's a difference between benevolent innovation and reckless iconoclasm. I think it started the day Peter Thiel gave money to Mark Zuckerberg.
Exactly. Words that seem different but mean whatever you want them to mean, up to and including the exact opposite: "tools for peace" <--> "weapons of mass destruction", etc.
>> '"On May 2, 2014, Zuckerberg announced that the company would be changing its internal motto from "Move fast and break things" to "Move fast with stable infrastructure".[40][41] The earlier motto had been described as Zuckerberg's "prime directive to his developers and team" in a 2009 interview in Business Insider, in which he also said, "Unless you are breaking stuff, you are not moving fast enough."[42]"'
Last night I changed some solid-js UI code to replace mutating the game object held in UI state with updating UI state using mutated clones (cloning is efficient and shares most of the data structurally; those optimizations were made for AI efficiency long ago). It's roughly the pattern sketched at the end of this comment.
Of course, with these stale game references around, I soon got reports of broken things: targeting was broken, PvP was broken, fade-out animations were broken.
A few hours later these issues were resolved. The players are used to these things happening sometimes, and it's fine since the stakes are low. It's just a game, after all, and being free, the active playerbase understands that they're the QA.
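For the curious, here's a minimal sketch of that pattern; the names and shapes are hypothetical, not the actual game's code:

  import { createSignal } from "solid-js";

  type Unit = { id: string; hp: number };
  type Game = { round: number; units: Unit[] };

  const [game, setGame] = createSignal<Game>({
    round: 1,
    units: [{ id: "a", hp: 10 }, { id: "b", hp: 10 }],
  });

  // Instead of mutating game() in place, build a shallow clone that
  // structurally shares everything untouched, then swap it into UI state.
  function damageUnit(id: string, amount: number) {
    setGame((g) => ({
      ...g, // new top-level object; unchanged fields are shared as-is
      units: g.units.map((u) =>
        u.id === id ? { ...u, hp: u.hp - amount } : u // clone only the hit unit
      ),
    }));
  }

The breakage described above follows directly: anything that captured the old game() reference keeps reading the stale object after the swap.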
I always thought "move fast and break things", as used at FB, was meant to empower the ambitious, talented, fresh crop of Ivy League grads with the confidence to move forward despite the poor decisions that come with a lack of experience.
You're closer to the truth, but with a bit of a harsh bias. It was simply permission to make mistakes. Sometimes you get it wrong, and it's better to get more done and risk mistakes than to move cautiously.
Facebook was famously sparse on unit tests, for example.
No? You're projecting what you want it to mean. The "break things" part means: don't be afraid to break functionality, features, or infrastructure in the process of improving them (new features, new scaling improvements, etc.). That's why it was renamed "Move fast with stable infrastructure".
> The earlier motto had been described as Zuckerberg's "prime directive to his developers and team" in a 2009 interview in Business Insider, in which he also said, "Unless you are breaking stuff, you are not moving fast enough."
It's about growth at all costs; then, once Facebook got big enough, they had to balance growth against other factors (plus, the things people were doing that caused breakages weren't actually helping growth).
Mottos like that live a life of their own. Take Google's "don't be evil": people remember that, and when they see all the evil shit Google does now, of course they're going to recall the motto and laugh at the irony. Whatever Sergey meant when he coined the phrase is irrelevant, imo.
Maybe true. But then, if it's just about development, it's a rather mundane old chestnut about reckless engineering versus good software engineering, etc. Granted, that's a different discussion, and we can see the tide turning now in terms of regulation and mandated software quality.
Sure, the Post Office/Fujitsu scandal, Boeing, etc., show how bad software actually ruins lives, but for the most part the externality imposed by the reckless software engineer is measured in "hours of minor inconvenience".
That said... I wonder, if you did a ballpark calculation of how much harm lies behind the Google Graveyard [0], whether the cost of what was broken outweighs the benefits of it ever having been made?
Engineering was literally taught to me at a well-respected engineering university as making an appropriate cost/reward trade-off and being careful in taking that risk. But the economics of the business were important too, as part of the competition to drive more efficiency into a system. In classical engineering the stakes are higher because you're dealing with people's lives, so you have to be more careful and add extra margins of error even when that's more expensive.
One person's recklessness is another person's calculated risk. The consequences of FB engineering mistakes are minimal, both in impact on customers and on FB's business. As FB scaled, the impact on individual people remained largely minimal (perhaps even beneficial), but the impact on their own business grew, and the same goes for their customers if their ads aren't getting eyeballs. So they shifted, as big companies do. It's kind of the best case of thoughtful risk-taking: we're rolling out a new system, we don't know what could go wrong at scale, and we put in the monitoring we think we need. If there are problems, we'll catch them with our monitoring/alerting and roll back or fix. You see the outages, but not the 99% of changes that go in without anything going wrong, which lets the business resolve issues quickly and cheaply.
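To make that loop concrete, here's a rough, purely illustrative sketch; real systems use a deploy pipeline and a metrics stack, and every name here is made up:

  // Hypothetical canary-style rollout: ship, watch the error rate, roll back on regression.
  type Release = {
    version: string;
    apply(): Promise<void>;
    rollBack(): Promise<void>;
  };

  async function rollOut(release: Release, errorRate: () => number): Promise<string> {
    const baseline = errorRate(); // error rate before the change
    await release.apply(); // ship it
    for (let i = 0; i < 10; i++) { // watch the monitors for a while
      await new Promise((r) => setTimeout(r, 60_000));
      if (errorRate() > baseline * 2) { // alert threshold (arbitrary: 2x baseline)
        await release.rollBack(); // cheap, fast recovery
        return "rolled back";
      }
    }
    return "healthy"; // the 99% case nobody ever notices
  }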
As for Boeing and Fujitsu, I'd say those are very different situations; they aren't an engineering problem, nor do they indicate a move-fast-and-break-things mentality. As with many things like that, the engineering mistakes are a small detail within the larger picture of corruption. Boeing wanted to escape having its plane classified as a new aircraft and met a perfect storm of skimping on hardware and corrupting the FAA through regulatory capture. I don't fully understand Boeing's role in the recent failures, since a subcontractor is involved, but my hunch is that they're nominally responsible for that subcontractor anyway. Same goes for Fujitsu: bad SW combined with an overly aggressive prosecution mandate, and then cover-ups of mistakes made on the assumption that the SW was correct, rather than assuming that new SW which had never run anywhere before might contain bugs. (I'm not really sure whether Fujitsu hid the bugs, or politicians did, or what happened, but certainly the Post Office officials hid the auditors' reports that found bugs in the SW and continued with prosecutions anyway.)
Btw, in engineering classes, all the large-scale failures we were taught about involved some level of corruption or a chain of mistakes: a contractor not conforming to the engineering specs to save on costs (a valid optimization, but one that should be done extra carefully), overlooking some kind of physical modeling that wasn't yet considered industry standard, kickbacks, etc.
We probably had similarly rigorous educations at that level. In SE we studied things like the '87 Wall St. crash versus Therac-25. The questions I remember were always around what "could or should" have been known, and crucially... when. Sometimes there's just no basis for making a "calculated risk" within a window.
The difference then, morally, is whether the harms are sudden and
catastrophic or accumulating, ongoing, repairable and so on. And what
action is taken.
There's a lot you say about FB that I cannot agree with. I think Zuckerberg as a person was and remains naive. To be fair, I don't think he ever could have foreseen or calculated the societal impact of social media. But as a company I think FB understood exactly what was happening and had hired minds politically and sociologically smart enough to see the unfolding "catastrophe" (Roger McNamee's words), but they chose to cover it up and stay the course anyway.
That's the kind of recklessness I am talking about. It's not like Y2K or Mariner 1 or any of those cases where a very costly outcome could have been prevented by a single more thoughtful decision early in development.
I'm talking strictly about the day-to-day engineering of pushing code and accidentally breaking something, which is what "move fast and break things" is about and how it was understood by engineers within Facebook.
You've now raised a totally separate issue, about the overall strategy and business development of the company, and there you'd be right: if a PE license were required to run an engineering company, Zuckerberg would have had his revoked, and any PEs complicit in the tuning for addictiveness should similarly be punished. But the lack of regulation of engineering projects that don't deal directly with human safety, and of how businesses are allowed to run, is a political problem.
I see we agree, and that as far as day-to-day engineering goes, I'd probably care very little whether a bug in Facebook stopped someone from seeing a friend's kitten pics.
But on the issue I'm really concerned about, do you think "tuning for
addictiveness" on a scale of about 3 billion users goes beyond mere
recklessness, and what do we do about this "political problem" that
such enormous diffuse harms are somehow not considered matters of
"human safety" in engineering circles?
I think there are political movements trying to regulate social media. There are lots of poorly regulated sub-industries within the tech field (advertising is another one).
> Sure, the Post Office/Fujitsu scandal, Boeing, etc., show how bad software actually ruins lives, but for the most part the externality imposed by the reckless software engineer is measured in "hours of minor inconvenience".
I've been deeply critical of the appalling behaviour of the Post Office and Fujitsu in the Horizon scandal, but there's a world of difference between this and the impact of Facebook in 2009. One had a foreseeable and foreseen impact on people's lives. The other was a social network competing with MySpace and looking for a way to monetise its popularity.
> there's a world of difference between this and the impact of
Facebook in 2009.
You're absolutely right there.
Frances Haugen's leaked internal communications showed incontrovertibly that Facebook's internal research had long known that teen girls experienced increased suicidal thoughts and developed eating disorders. Facebook and Instagram products exploited teens with manipulative algorithms designed to amplify their insecurities, and that was documented. Yet they consistently chose to maximise growth rather than implement safeguards, and to actively bury the truth that their product caused deaths [0]. Similarly, the Post Office had mountains of evidence that its software was ruining lives, yet engaged in a protracted, active cover-up [1].
So, very similar.
But what's the "world of difference"?
> looking for a way to monetise its popularity.
That's a defence? You know what, that makes it worse. The Post
Office were acting out of fear, whereas Facebook acted out of vanity
and greed. The Post Office wanted to hide what had happened, whereas
Facebook wanted to cloak ongoing misdeeds in order to continue.
Simply despicable.
The way I see it, Facebook comes out looking much, much worse.
The NPR link is from 2021, not 2009. It links out to research from 2019, still not 2009. In 2009, Facebook was still branching out among university students.