It's gratifying to see how thoroughly the same organization has learned from the debacle that was the original rewrite from Netscape 4 to Mozilla. That time, they didn't ship a release for years, losing market share and ceding the web to Internet Explorer for the next decade. Joel Spolsky wrote multiple articles[1][2] pointing out their folly.
This time, their "multiple-moonshot effort" is paying off big-time because they're doing it incrementally. Kudos!
Joel is making two separate claims there, though he doesn't cleanly distinguish them.
One is that rewriting from scratch is going to give you a worse result than incremental change from a technical point of view (the « absolutely no reason to believe that you are going to do a better job than you did the first time » bit).
The second is that independent of the technical merits, rewriting from scratch will be a bad commercial decision (the « throwing away your market leadership » bit).
We now know much more about how this turned out for his chosen example, and I think it's clear he was entirely wrong about the first claim (which he spends most of his time discussing). Gecko was a great technical success, and the cross-platform XUL stuff he complains about turned out to have many advantages (supporting plenty of innovation in add-ons, which I don't think we'd have seen otherwise).
It's less clear whether he's right about the second: certainly Netscape did cede the field to IE for several years, but maybe that would have happened anyway; Netscape 4 wasn't much of a platform to build on. I think mozilla.org, considered as a business, has done better than most people would have expected in 2000.
I think we can say that Gecko ended up technically better than the incrementally changed Internet Explorer, which was arguably starting from a more maintainable codebase than Netscape 4. That's hardly conclusive, but it's some evidence.
Indeed. My intuition, based on nothing other than my subjective experience, is that there are times when throwing away the code is the correct decision, but they are a rounding error compared to the times when people want to throw away the code, so to a first approximation "never start over from scratch" is correct.
Simply shifting the burden of proof to the "rewrite" side is usually sufficient. Where I currently work, a rewrite request is not answered with "no" but with "show me the data." 90% of the time the requester doesn't even try to gather data; the other 10% of the time, the exercise produces some useful metrics about a component that everyone agrees needs some TLC, whatever the final decision.
> we can say that Gecko ended up technically better than incremental changes to Internet Explorer
> ...
> That's hardly conclusive but it's some evidence.
Given Internet Explorer's monopoly position, and consequent disincentive to compete, it's not really the best comparison.
Compare to something like Opera Presto, a codebase that - while younger than Netscape - predates Internet Explorer, which underwent incremental changes while remaining in a minority market position. It was killed by market forces, but I doubt anyone would call it a badly put together piece of software in the end.
Konqueror is another example. It's not quite as compelling, since KHTML itself has fared less well than its forks, Safari's WebKit has never exactly been a leader in engine advancement, and Chrome's innovations, while incremental, were largely built on a scratch-rewrite-and-launch-all-at-once of one component (V8). However, KHTML/WebKit/Blink is still pretty much an incremental rewrite success story.
I actually used Opera because it allowed me to stop websites from hijacking my browser: no capturing my right click or any of that silly bullshit. The exact UI/features I wanted. Opera 6-12 were good times.
Yeah, but Microsoft let IE stagnate for half a decade after it had achieved dominance. There were five years between IE 6 and IE 7. If MS hadn't disbanded the IE team but had kept up the pace of incremental development, I very much doubt Mozilla would ever have been able to catch up again.
I would say Joel's point still holds in the general case, where you can't count on competitors to just stop developing their product until you have achieved parity again.
And, as far as I know, Microsoft has developed Edge from scratch after all. The years of incremental updates to IE are now maintained for legacy support.
That’s incorrect. Edge is still Trident, but they took the opportunity of a change in name to rip out all of the old backwards-compatibility layers and leave just the modern stuff.
I disagree on item 1. The basis for this crystallized on a contract a few years ago, as I was being scolded for fixing things instead of letting them be because "we're going to do a rewrite soon" (even though the boss who promised this had been promoted out of the org).
The promise of a rewrite institutionalized accumulating technical debt. When it comes time to do the rewrite, everyone starts out on the wrong foot. The big rewrite is a lot like New Year's resolutions: they don't stick, they cost money and time, they create negative reinforcement, and sometimes people get hurt.
Refactoring institutionalizes continuous improvement and thinking about code quality, yet it discourages perfectionism, because you can always fix it later when you have more info. My theory is that people good at refactoring can handle a rewrite well; maybe they are the only people who can. But if you can refactor like that you don't need a rewrite (or rather, you've already done it and nobody noticed).
I think there is something to be said for the technical & market advantage of “rewrites” in the sense of “very ambitious but still incremental refactorings”. Literally rewriting all the code from scratch is likely to be a mistake, but there’s a spectrum from “edit” through “refactoring” to “rewrite”, and it can pay to push toward the more aggressive end of that spectrum sometimes, if you know precisely what you’re doing.
That is, some of my projects (personal & professional) have benefited enormously from designating the old version as a “prototype”, creating a “new” project with slightly different architectural decisions based on the deficiencies of the old version, copying most of the code from the old version to the new, and filling in the details.
The original plan for the Servo project was to develop a new browser engine separate from Gecko, similar to the original Mozilla transition. An alpha-quality browser with Servo as its engine was originally a 2015 goal, and the active goal tracking this was removed from the roadmap in 2016.
As someone who was involved with Servo during that time period, I was disappointed at the time that it was quite obviously not going to happen. However, looking at what has happened since then, the change of focus towards Firefox integration was definitely a good move.
With all due respect, browser.html can do so little that it's hardly more than a prototype. Also, Servo itself as a web engine is still far from supporting enough of the web stack to be usable in a daily browser.
I guess putting effort into Stylo diverted some resources from Servo - I feel it didn't progress much over the last year in terms of "sites I can browse for more than 5 minutes". At this point it's clear that Mozilla is unlikely to staff Servo enough to fill the gap (although cpearce and others are working on getting the Gecko media stack into Servo), but I wonder what will happen once WebRender makes its way into Gecko; are there any other pieces of Servo that can be useful?
> are there any other pieces of Servo that can be useful
I definitely think Servo layout can be made useful. There have been no indications that parallel layout doesn't work in practice. Our performance tests have had good results, comparable to the results we saw in Stylo.
I would like to redo the way Servo layout scheduling works, possibly when async/await lands in Rust. I believe that will not only improve performance but also fix a whole bunch of random bugs relating to floats where our scheduler currently just does the wrong thing. (That doesn't mean we have to throw out all of that code or anything like that; it just means I'd like to redo how that works.)
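For intuition, here's a toy sketch of why layout parallelizes at all: sibling subtrees can (floats aside) be laid out independently, then combined in a sequential bottom-up pass. To be clear, this is not Servo's actual code; the Node type, the made-up width rule, and the use of rayon's work-stealing thread pool are all illustrative assumptions.

    use rayon::prelude::*;

    struct Node {
        children: Vec<Node>,
        intrinsic_width: f32,
        computed_width: f32,
    }

    fn layout(node: &mut Node, available_width: f32) {
        // Independent sibling subtrees are laid out in parallel;
        // the work-stealing scheduler balances uneven subtrees
        // across threads.
        node.children
            .par_iter_mut()
            .for_each(|child| layout(child, available_width));

        // Sequential bottom-up combine. Made-up rule for the sketch:
        // widest child wins, clamped to the available width.
        node.computed_width = node
            .children
            .iter()
            .map(|c| c.computed_width)
            .fold(node.intrinsic_width, f32::max)
            .min(available_width);
    }

The scheduling problem mentioned above is exactly what this sketch glosses over: floats break the independence assumption between siblings, which is why deciding when it's safe to fork is the hard part.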
Once you have layout, style, and graphics out of the way, what remains—really, at that point, the difference between Gecko and Servo, aside from embeddability—will mostly be DOM APIs and relatively isolated components such as image decoders. It's an open question how to use Servo DOM API implementations in Gecko. In any case, though, Servo's DOM story may change with the arrival of Alan Jeffrey's Josephine or something like it, and it probably makes sense to revisit this question once we decide how all of that will shake out.
And, personally, I don't know if you consider Pathfinder part of Servo or not, but I'm certainly aiming to get that in.
One thing's for sure: we won't be running out of things to do in Research any time soon. :)
Wasn't the original Mozilla, leading into Firefox, wildly successful, though? It certainly put a dent in IE's market share. The opposite has been true against Chrome, so it seems their recent history is not one of successful adaptation.
The first thing to consider is that MS let IE stagnate and deviate significantly from W3C standards. Mozilla rode in on the cry for standards adherence.
Chrome, on the other hand, is far from stagnant, and Google shows no sign of letting it sit still.
Never mind that Google is using their marketing muscle to push Chrome every chance they get.
Just try downloading some shareware on Windows, and you are bound to get Chrome along for the ride. Hell, even Adobe bundles Chrome with Flash downloads, even though Google is using Chrome to effectively kill Flash...
While Google has not let Chrome stagnate, the Chrome of today is no longer the Chrome that was released. Chrome has definitely felt slower and heavier over the years. Google today is no longer the Google it once was, and it is getting lots of stick over privacy problems. Firefox happens to be in the right place at the right time.
I'm curious - in both of those articles Joel uses the expression "software doesn't rust" to make fun of Netscape... is that where the name for the Rust language came from?
The name has multiple origins; Graydon notoriously gave a different story each time he was asked.
One of the origins mentioned is that Graydon thought that rusts (a kind of fungus) were pretty cool (they are! they have a super complex life cycle, and it's pretty interesting).
Another is that Rust actually contains nothing conceptually new: everything in it is a very well established concept from earlier PL research; Rust just packages them nicely. On that reading, the name is a nod to deliberately building on old, proven ideas.
I noticed that too and was curious. A quick search doesn't say that's where it came from, but I agree it would be ironic. Maybe that planted a seed in someone's mind.
[1] https://www.joelonsoftware.com/2000/04/06/things-you-should-...
[2] https://www.joelonsoftware.com/2000/11/20/netscape-goes-bonk...