It's gratifying to see how successfully the same organization has learned from the debacle that was the rewrite from Netscape 4 to Mozilla in the first place. That time, they didn't release for years, losing market share and ceding the web to Internet Explorer for the next decade. Joel Spolsky wrote multiple articles[1][2] pointing out their folly.
This time, their "multiple-moonshot effort" is paying off big-time because they're doing it incrementally. Kudos!
Joel is making two separate claims there, though he doesn't cleanly distinguish them.
One is that rewriting from scratch is going to give you a worse result than incremental change from a technical point of view (the « absolutely no reason to believe that you are going to do a better job than you did the first time » bit).
The second is that independent of the technical merits, rewriting from scratch will be a bad commercial decision (the « throwing away your market leadership » bit).
We now know much more about how this turned out for his chosen example, and I think it's clear he was entirely wrong about the first claim (which he spends most of his time discussing). Gecko was a great technical success, and the cross-platform XUL stuff he complains about turned out to have many advantages (supporting plenty of innovation in addons which I don't think we'd have seen otherwise).
It's less clear whether he's right about the second: certainly Netscape did cede the field to IE for several years, but maybe that would have happened anyway: Netscape 4 wasn't much of a platform to build on. I think mozilla.org considered as a business has done better than most people would have expected in 2000.
I think we can say that Gecko ended up technically better than incremental changes to Internet Explorer, which I think was starting off from a more maintainable codebase than Netscape 4. That's hardly conclusive but it's some evidence.
Indeed. My intuition, based on nothing other than my subjective experience, is that there are times when throwing away the code is the correct decision, but they are a rounding error compared to the times that people want to throw away the code, so to a first approximation "never start over from scratch" is correct.
Simply shifting the burden of proof to the "rewrite" side is usually sufficient. Where I currently work, a rewrite request is not answered with "no" but with "show me the data." 90% of the time the requester doesn't even try to gather data; the other 10% of the time the request results in some useful metrics about a component that everyone agrees needs some TLC, whatever the final decision.
> we can say that Gecko ended up technically better than incremental changes to Internet Explorer
> ...
> That's hardly conclusive but it's some evidence.
Given Internet Explorer's monopoly position, and consequent disincentive to compete, it's not really the best comparison.
Compare to something like Opera Presto, a codebase that - while younger than Netscape - predates Internet Explorer, and which underwent incremental changes while remaining in a minority market position. It was killed by market forces, but I doubt anyone would claim it was a badly put together piece of software in the end.
Konqueror is another example. It's not quite as compelling, as KHTML itself has fared less well than its forks, Safari WebKit has never exactly been a leader in engine advancement, and Chrome's innovations, while incremental, were largely built on a scratch-rewrite-and-launch-all-at-once of one component (V8). However KHTML/Webkit/Blink is still pretty much an incremental rewrite success story.
I actually used Opera because it allowed me to stop websites from hijacking my browser. No capturing my right click or any of this silly bullshit. The exact UI/features I want. Opera 6-12 were good times.
Yeah but Microsoft let IE stagnate for half a decade after it had achieved dominance. There were 5 years between IE 6 and IE 7. If MS hadn't disbanded the IE team but had kept up the pace of incremental development, I very much doubt Mozilla would have ever been able to catch up again.
I would say Joel's point still holds in the general case, where you can't count on competitors to just stop developing their product until you have achieved parity again.
And, as far as I know, Microsoft has developed Edge from scratch after all. The years of incremental updates to IE are now maintained for legacy support.
That’s incorrect. Edge is still Trident, but they took the opportunity of a change in name to rip out all of the old backwards-compatibility layers and leave just the modern stuff.
I disagree on item 1. My basis for this occurred to me on a contract a few years ago, as I was being scolded for fixing things instead of letting them be because “we’re going to do a rewrite soon” (even though the boss that promised this got promoted out of the org).
The promise of a rewrite institutionalized accumulating technical debt. When it comes time to do the rewrite, everyone starts out on the wrong foot. The big rewrite is a lot like New Year’s resolutions: they don’t stick, they cost money and time, they create negative reinforcement, and sometimes people get hurt.
Refactoring institutionalizes continuous improvement, thinking about code quality, and yet discourages perfectionism because you can always fix it later when you have more info. My theory is that people good at refactoring can handle a rewrite well. Maybe are the only people that can handle a rewrite well. But if you can refactor like that you don’t need a rewrite (or rather, you’ve already done it and nobody noticed).
I think there is something to be said for the technical & market advantage of “rewrites” in the sense of “very ambitious but still incremental refactorings”. Literally rewriting all the code from scratch is likely to be a mistake, but there’s a spectrum from “edit” through “refactoring” to “rewrite”, and it can pay to push toward the more aggressive end of that spectrum sometimes, if you know precisely what you’re doing.
That is, some of my projects (personal & professional) have benefited enormously from designating the old version as a “prototype”, creating a “new” project with slightly different architectural decisions based on the deficiencies of the old version, copying most of the code from the old version to the new, and filling in the details.
The original plan for the Servo project was to develop a new browser engine separate from Gecko, similar to the original Mozilla transition. An alpha-quality browser with Servo as its engine was originally a 2015 goal, and the active goal tracking this was removed from the roadmap in 2016:
As someone who was involved with Servo during that time period, I was disappointed at the time that it was quite obviously not going to happen. However, looking at what has happened since then, the change of focus towards Firefox integration was definitely a good move.
With all due respect, browser.html can do so little that it's hardly more than a prototype. Also, Servo itself as a web engine is still far from supporting enough of the web stack to be usable in a daily browser.
I guess putting effort into Stylo diverted some resources from Servo - I feel it hasn't progressed much over the last year in terms of "sites I can browse for more than 5 minutes". At this point it's clear that Mozilla is unlikely to staff Servo to fill the gap (although cpearce and others are working on getting the Gecko media stack into Servo), but I wonder what will happen once WebRender makes its way into Gecko; are there any other pieces of Servo that can be useful?
> are there any other pieces of Servo that can be useful
I definitely think Servo layout can be made useful. There have been no indications that parallel layout doesn't work in practice. Our performance tests have had good results, comparable to the results we saw in Stylo.
I would like to redo the way Servo layout scheduling works, possibly when async/await lands in Rust. I believe that will not only improve performance but also fix a whole bunch of random bugs relating to floats where our scheduler currently just does the wrong thing. (That doesn't mean we have to throw out all of that code or anything like that; it just means I'd like to redo how that works.)
Once you have layout, style, and graphics out of the way, what remains—really, at that point, the difference between Gecko and Servo, aside from embeddability—will mostly be DOM APIs and relatively isolated components such as image decoders. It's an open question how to use Servo DOM API implementations in Gecko. In any case, though, Servo's DOM story may change with the arrival of Alan Jeffrey's Josephine or something like it, and it probably makes sense to revisit this question once we decide how all of that will shake out.
And, personally, I don't know if you consider Pathfinder part of Servo or not, but I'm certainly aiming to get that in.
One thing's for sure: we won't be running out of things to do in Research any time soon. :)
Wasn't the original Mozilla leading into Firefox wildly successful, though? It certainly put a dent in IE's market share. The opposite has been true for Chrome so it seems their recent history is not one of successful adaptation.
The first thing to consider is that MS let IE stagnate, and had large deviations from W3C standards. Mozilla rode in on the cry for standard adherence.
Chrome, on the other hand, is not stagnant - far from it - and Google is not letting it stagnate.
Never mind that Google is using their marketing muscle to push Chrome every chance they get.
Just try downloading some shareware on Windows, and you are bound to get Chrome along for the ride. Hell, even Adobe bundles Chrome with Flash downloads, even though Google is using Chrome to effectively kill Flash...
While Google has not let Chrome stagnate, the Chrome of today is no longer the Chrome that was first released. Chrome has definitely felt slower and heavier over the years. Google today is no longer the Google it once was*, and it is getting a lot of stick over privacy problems. Firefox happens to be in the right place at the right time.
I'm curious - in both of those articles Joel uses the expression "software doesn't rust" to make fun of Netscape. Is that where the name for the Rust language came from?
The name has multiple origins, Graydon notoriously gave a different story each time he was asked.
One of the origins mentioned is that Graydon thought that rusts (a kind of fungus) were pretty cool (they are! they have a super complex lifecycle and it's pretty interesting).
Another is that Rust is actually not something with new concepts in it: everything in Rust is a well-established concept from earlier PL research; Rust just packages them nicely.
I noticed that too and was curious. A quick search doesn't say that's where it came from, but I agree it would be ironic. Maybe that planted a seed in someone's mind.
> Stylo was the culmination of a near-decade of R&D, a multiple-moonshot effort to build a better browser by building a better language.
This is the most impressive and useful aspect of all the recent work in Firefox. Rust is an amazing language. It really brings something new to the table with its borrow checker.
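To make that concrete, here is a minimal example of my own (deliberately written so it will not compile): the borrow checker rejects mutating a vector while a reference into it is still alive, which in C++ would be a latent use-after-free if the push reallocates.

```rust
fn main() {
    let mut names = vec![String::from("stylo")];

    // Take a reference into the vector...
    let first = &names[0];

    // ...then try to grow the vector while that reference is still live.
    // Pushing may reallocate and leave `first` dangling, so rustc refuses:
    // error[E0502]: cannot borrow `names` as mutable because it is also
    // borrowed as immutable.
    names.push(String::from("webrender"));

    println!("{}", first);
}
```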
The fact that Rust was created as part of a greater effort to work on a web browser is amazing.
What I wonder, and I do not mean this in a negative way, is whether this would have happened in a more commercially oriented organisation. Mozilla remains a foundation, and I consider Rust a fruit of their labour in itself.
To put it another way, I find it hard to justify developing Rust just for a web browser. But if you consider it from the perspective of a foundation developing tools for the developer community as a whole, it makes much more sense.
It's certainly true that corporations do put a lot of work into languages and runtimes. Apple created LLVM and clang, Microsoft created .NET and CLR with C#, F#, VB.NET, etc.
These projects were valuable to Apple and Microsoft for a variety of reasons:
* promoting their IDE: Xcode builds faster and has better error messages. You can use any .NET language with Visual Studio in the same project.
* promoting their platform: Objective-C and Cocoa let you create fast GUI apps in a standard way, and we don't need GCC anymore. .NET provides a useful feature-complete standard library over a variety of languages.
By contrast, Rust was made with the intention of simply making a better systems language. Rust doesn't have a standard library or environment tied to a specific OS or proprietary dependencies. Rust itself doesn't promote Windows, OS X, ASP.NET, Cocoa, iOS, Android, etc. That is what makes it seem much less likely that Rust would be created by a corporation.
To be specific, LLVM started as an academic project by Vikram Adve and Chris Lattner in 2000. Apple hired Chris Lattner in 2005 to work on LLVM for Apple. Clang, though, does appear to have been an Apple project, being introduced by Steve Naroff of Apple in 2007 as an open-source project.
Indeed. Apple often also gets flak for taking KHTML and running with it, but footing the bill for WebKit development was a good thing for the Internet in general.
The browser is the only program on my computer that has a complexity that scares me.
I can look at the Linux source and figure out what is going on. There are some hard parts (synchronisation stuff and virtual memory are quite opaque at first look). The code to handle layout and document processing in LibreOffice is hairy, but I think I could manage.
A browser, on the other hand: layout and years of accumulated corner cases (handling the infinite variety of bad code out there), proper CSS rendering, multiple JITs, a shit-tonne of state, and sandboxing of things that weren't meant to be sandboxed in the first place. Most, or maybe even all of it, at the very bleeding edge of CS research.
The environment in which Erlang was developed was very different from the environment today. There were no third-party 4GLs available targeting the niche that Ericsson wanted, and there are good reasons to not want to use C in a telephone exchange.
The majority of programming languages used across the industry have had commercially oriented organisations behind them, including all the C family of programming languages.
Well, it's been in development for at least 7 years, without any product until the URL parser appeared. That's a far greater investment with no return than I've seen at any software enterprise outside of Google X, Microsoft Research, and Intel's various funding efforts, and each of those arguably has had more projects cut off before they return than projects that succeeded in generating some return.
Enterprises tend to invest in <5 year projects, I've noted, and 5 years is a hell of a long time for an extended investment.
> Well, it's been in development for at least 7 years, without any product until the url parser appeared.
This is inaccurate, the URL parser never was and still isn't in a released Firefox.
The first Rust in a released Firefox was the MP4 metadata code.
It's worth noting that in those 7 years Servo advanced a lot, which meant that the Stylo project didn't have to rewrite a style system, just take an existing one and polish it. (This is still a lot of work, but not as much)
Well, I'm certainly happy to admit my details are incorrect, but I think my broad point still stands—they look longer term than the typical commercial offering.
This is pretty much just the purview of R&D departments in general, which includes Mozilla Research. It's just a happy coincidence that, thanks to open source, software companies are relatively incentivized to share their projects with the public rather than keeping them proprietary, which is the default tack for R&D units in other industries.
Well, I'm not even necessarily trying to soap box here about open source or free software per se—I do think commercial/proprietary research has value to society as a whole, albeit less value. For instance, take Bigtable—enormously influential and, I think, beneficial to society in spite of being largely closed off to the public. However, Rust is way better for everyone, and I find it shocking it came from such a relatively small organization.
It doesn't really matter that Mozilla's a small organization. All they had to do was provide strong leadership and management expertise, and entice the open source community to voluntarily join and advance the project accordingly. That led not just to Mozilla but to a few other organizations joining in with developers of their own to collaborate, along with an army of rogue volunteers who aren't backed by any organization. That's just not something you'll ever see from a corporation that has to answer to greedy shareholders that only care about ROI figures, especially short-term ROI figures.
In addition to this, Mozilla is hardly the size of Microsoft or Google. A more commercially focused version of Mozilla would probably have dedicated its resources elsewhere.
Actually, Java made programming easier compared to C++, with 'elegant syntax' (similar to C++) and 'sensible semantics' (similar to Smalltalk) - especially 'without pointers'!
And probably, Kotlin has made it even simpler than Java.
So, it's an evolving process.
On the other hand, JavaScript - though still painful - has no alternatives... hence, JavaScript is still OK - without any close competitor!
Let's imagine a company (like Sun Microsystems) making a wonderful language (like Java) and providing no IDEs. We really had a tough time during the initial years of Java - just Borland JBuilder and some other primitive IDEs - until we got wonderful IDEs like Eclipse, followed by IntelliJ IDEA.
It's widely understood that you're referring to JetBrains' Kotlin. In that case, support for a 'new language' from an industry-famous IDE really is some sort of a gift!
Besides, why would a company want to invest in R&D to create a language and just give it away for free, without any tangible business benefit?
Remember, Sun did so... giving away Java for free. Eventually - under stress - they themselves were sold (to Oracle).
Rust's borrow checker, ownership, and whole model are quite complicated, actually. Remember, we are going for a complete, sound model without holes.
There have been soundness bugs during development, and I think we have some really amazing people behind the core language who have been able to refine and adapt Rust as it grows.
You're welcome! I guess while I'm at it I should mention that I am a huge fan of your HN comments also. I learn more from them on a technical level than probably anyone else on this site.
Yeah. I got a taste of Rust in the last two years and it's the first tool set I really want to continue using. And it gave me the interest to dig into systems programming. I'm very grateful for that!
Yeah - it's really inspiring to see the grit and determination poured in over many years starting to pay off. Truly great software engineering happening over at Mozilla!
> They borrowed Apple’s C++ compiler backend, which lets Rust match C++ in speed without reimplementing decades of platform-specific code generation optimizations.
This was a pretty smart move by the Rust team, and it gave them a rock-solid foundation for going cross-platform. In the words of Newton, "If I have seen further it is by standing on the shoulders of giants." Kudos, team Rust, and let's hope they eat C++'s lunch soon.
Apple "just uses LLVM" in the same way Apple "just uses Webkit".
Apple hired Chris Lattner back in 2005, just as he completed his PhD. At the time, LLVM could just barely compile Qt4[0] (the result didn't quite actually work yet) and was still hosted on CVS.
Lattner had to make LLVM production-ready (rather than a research project), add a bunch of major features, start a C/C++/ObjC frontend (Clang) and create a team of LLVM developers in the company.
Apple shipped their first bits of LLVM-based tooling in Leopard in 2007.
LLVM started out as a research project at the University of Illinois, but Apple hired the lead developer in 2005 and put a ton of work into making it good enough to replace GCC. Apple also originally wrote Clang, LLVM's C/C++/Objective-C frontend, though Rust doesn't directly rely on this part.
Calling it "Apple's" threw me off too, but it's not entirely misleading because without Apple, it might not have become a production-ready compiler. At the least, I would say Apple did more than "just use it".
Apple doesn't "just use it". LLVM was originally developed at an university, then made production ready at Apple, and is still very much influenced by Apple, as Apple bases almost everything they make on it.
It's slightly oversimplified. Apple hired Chris Lattner from academia to ramp up work on LLVM & create Clang. (Apple always hated GNU stuff so the motive might not have been purely technical.)
The Objective-C story is a little more complicated. Originally Apple had a hostile fork of GCC and even initially refused to provide source code until the FSF lawyers got involved. It's a small wonder the GCC Objective-C support is as good as it is considering the politics.
Debugging info works similarly in all compilers (GCC, MSVC, etc.) - it's saved in the compiler output and read by tools like IDEs.
>For instance, GCC uses a step called fold that is key to the overall compile process, which has the side effect of translating the code tree into a form that looks unlike the original source code. If an error is found during or after the fold step, it can be difficult to translate that back into one location in the original source.
Are you being sarcastic? Because Android is very much Google's. Is there anyone else who has anywhere close to as much influence on the development of Android as Google has? Android is for the most part being developed behind closed doors at Google. About once a year they lob over a bunch of new code over the fence, and everyone else gets to develop from there.
It's an argument from absurdity. Nobody would say Android isn't Google's project, even though it wasn't started by Google, and wasn't written exclusively by Google employees.
Same applies to LLVM. The only reason it might not seem a fair comparison is that Apple has run LLVM as a truly open source endeavour, whereas Android (which marketed itself to geeks as the more open platform) has, as you rightly point out, been run as a closed-source project with occasional begrudging nods to its ever-shrinking open source subset.
It might sound very naive to say this, but I found it very cool that it was someone from the States, an Aussie and a Spaniard working on this, open source is something magical when you think about it. Props to everyone involved, all those projects sound like a lot of fun for a good cause.
Both the Stylo and Servo teams are extremely distributed.
For Servo we've had long periods of time where no two employees are in the same office (it helps that most folks are remote).
In both teams we found it impossible to pick meeting times that work for everyone so we instead scheduled two sessions for the recurring meetings, and you can show up for one of them (the team lead / etc are usually at both so they're scheduled such that it is possible for them to attend both). Though for Servo we eventually stopped the meeting since we're pretty good at async communication through all the other channels and the meeting wasn't very useful anymore.
Also, to clarify, while that was the initial team, shortly after getting Wikipedia working the team grew with folks from the Servo and Gecko teams. Eventually we had folks living in the US, Paris, Spain, Australia, Japan, Taiwan, and Turkey working on it. As well as volunteer contributors from all over the place.
I love Firefox Quantum and it has replaced Chrome as my browser at home. Its memory consumption is far lower with the same number of tabs open.
That said, why does it perform slower than Chrome on most benchmarks? Is it due to the Chrome team doing much more grunt work regarding parallelism and asynchronous I/O? Or are there features in the current Firefox build that still call the original engine?
Which benchmarks are you talking about? It depends on what those benchmarks measure.
For example, a lot of the Quantum work was in user-perceived UI latency; unless the benchmark is measuring that, and I imagine that's a hard thing to measure, it's not going to show up.
> Does Rust have a runtime penalty as Golang does?
I'm embarrassed to say that I just blindly trusted a couple of websites' claims that they ran benchmarks, without verifying they're even industry-standard. The articles were on Mashable and AppleInsider.
Mashable tested webpage load times, which is only one dimension of many to optimize for. AppleInsider looked at speed, CPU usage, and memory consumption.
No worries! Benchmarking is super hard, generally, and browsers are so huge, there's tons of things that you can measure. I'm not trying to say that you shouldn't trust any of them, just that that might be the source of discrepancy between what you've experienced and those numbers.
It's also true that sometimes it's slow! There's always more to do.
Not sure if this is normal, but I have very noticeable lag in the search/address bar autocomplete which does make the whole browser feel a bit slow (MacOS Sierra, using Developer Edition).
And since we are here, the prompt/dialog windows in FF are still not native-looking either. These are my two major complaints :)
While I could learn C++ or Rust in 10 years, I'm not going to do it for a bug that isn't even biting me any more. I've long since worked around it. It makes more sense for me to donate to Mozilla so they can hire someone who knows what they're doing.
I've noticed this on Windows. The new Firefox seems to be a bit more chatty to the disk. If some other process is hammering the disk then autocomplete gets really laggy. I suspect it is optimized for SSDs.
On the other hand, if the disk is idle it is blazing fast.
Reminds me of behaviors I have seen on Android for the longest time (to the point that I have effectively given up on using Firefox on Android because it slows down so easily there).
One thing I've been wondering: Stylo and WebRender can parallelize CSS and paint, respectively, but I haven't seen any mention in Project Quantum (the project to integrate Servo components into Firefox/Gecko) of any component to parallelize layout, which is probably the biggest bottleneck on the web at the moment.
Is parallel layout something which can only be done through a full rewrite, hence with Servo, and bringing Servo up to full web compatibility, or can this be handled through the Project Quantum process, of hiving off components from Servo into Firefox?
Now, once Stylo and WebRender are in play, ideally layout can just fit in between. All the interfacing already exists from Servo.
However, there are a lot more things in Firefox that talk to layout. This would need work, more work than Stylo.
But this isn't the major issue. The major issue is that Servo's layout has a lot of missing pieces, a lot more than was the case with its style system. It's hard to incrementally replace layout the way WebRender did with rendering (fall back when you can't render, render to texture, include it).
The OP links a video from 2015 that implies that one of the advantages of making Stylo the first Servo component in Gecko is that the next phase in the pipeline, layout, will be able to benefit from having a well-defined interface in place. I'm curious about this as well!
Since I gave that talk, it's become clearer to me that Servo's layout engine is a lot farther from feature-complete than the CSS engine was. So my hunch is that the granularity of incrementalism we used for Stylo may not be workable for layout.
That said, we are absolutely going to explore opportunities for more Rust/Servo in layout, so we just need to find the right strategy. One incremental step I'm interested in exploring is to rewrite Gecko's frame constructor in Rust using the servo model, but have it continue to generate frames in the Gecko C++ format. This would give us rayon-driven parallelism in frame construction (which is a big performance bottleneck), while being a lot more tractable than swapping out all of layout at once. Another thing to explore would be borrowing Servo's tech for certain subtypes of layout (i.e. block reflow) and shipping it incrementally.
Each of these may or may not share code with Servo proper, depending on the tradeoffs. But Servo has a lot of neat innovation in that part of the pipeline (like bottom-up frame construction and parallel reflow) that I'm very interested in integrating into Firefox somehow.
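For anyone who hasn't used rayon, here is a rough sketch of the kind of work-stealing, bottom-up tree traversal being described. The Node type and the width computation are invented for illustration; this is not Servo's actual frame-construction or reflow code.

```rust
use rayon::prelude::*;

// A hypothetical DOM-like node, not Servo's real frame or flow types.
struct Node {
    intrinsic_width: f32,
    children: Vec<Node>,
}

// Bottom-up pass: subtrees are processed in parallel on rayon's
// work-stealing thread pool, then the results are folded at the parent.
fn preferred_width(node: &Node) -> f32 {
    let children_width: f32 = node
        .children
        .par_iter()           // parallel iterator over the children
        .map(preferred_width) // recurse into each subtree in parallel
        .sum();
    node.intrinsic_width.max(children_width)
}

fn main() {
    let tree = Node {
        intrinsic_width: 100.0,
        children: vec![
            Node { intrinsic_width: 60.0, children: vec![] },
            Node { intrinsic_width: 80.0, children: vec![] },
        ],
    };
    println!("preferred width: {}", preferred_width(&tree));
}
```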
We're going to meet in Austin in a few weeks and discuss this. Stay tuned!
Speaking of which: does anyone know if some new optimization landed in the beta versions a couple of days ago? Or if some bug that caused delays on Linux got fixed?
I updated my developer version yesterday and it was as if Firefox - already ludicrously fast compared to before - turned on the turbo booster.
It's only on Nightly and still buggy on certain chipsets. At home I have a Kaby Lake system and am using WebRender for daily browsing without problems. At work I use a Skylake system, which has trouble with some sites, such as https://tradingview.com
If you want to read the latest info, there is a good status post from a couple of days ago:
Feel free to file a bug report with your hardware, Firefox version, and test case, yes. These often boil down to simple bugs (hardware-specific or page-specific) that can be quickly fixed when isolated.
The medium-term effort to revamp the graphics stack is WebRender. Note that, like Stylo, WebRender is not just meant to achieve parity with other browsers. It's a different architecture entirely that is more similar to what games do than what current browsers do.
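To make the "more like a game engine" point concrete, here is a toy sketch of the retained display-list idea: the content side builds a list of display items each frame and hands the whole scene to a GPU renderer, instead of invoking painting callbacks. The types and methods below are invented for illustration and are not WebRender's actual API.

```rust
// A toy "retained display list", invented for illustration only.
#[derive(Debug, Clone, Copy)]
struct Rect { x: f32, y: f32, w: f32, h: f32 }

#[derive(Debug)]
enum DisplayItem {
    SolidColor { rect: Rect, rgba: [f32; 4] },
    Text { rect: Rect, glyphs: Vec<u16> },
}

#[derive(Default, Debug)]
struct DisplayListBuilder {
    items: Vec<DisplayItem>,
}

impl DisplayListBuilder {
    fn push_rect(&mut self, rect: Rect, rgba: [f32; 4]) {
        self.items.push(DisplayItem::SolidColor { rect, rgba });
    }
    fn push_text(&mut self, rect: Rect, glyphs: Vec<u16>) {
        self.items.push(DisplayItem::Text { rect, glyphs });
    }
    // In a real engine the finished list would be serialized and sent to the
    // GPU process, which batches items into as few draw calls as possible.
    fn finalize(self) -> Vec<DisplayItem> {
        self.items
    }
}

fn main() {
    let mut builder = DisplayListBuilder::default();
    builder.push_rect(Rect { x: 0.0, y: 0.0, w: 800.0, h: 600.0 }, [1.0, 1.0, 1.0, 1.0]);
    builder.push_text(Rect { x: 10.0, y: 10.0, w: 200.0, h: 16.0 }, vec![42, 7, 13]);
    for item in builder.finalize() {
        println!("{:?}", item);
    }
}
```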
Kind of OT, but I've noticed both latency and battery-life hits for compositing WMs in X (compared to traditional WMs).
Are those things being measured at all in FF? It may be that the tradeoff is worth it (and I have no doubt it can be done better than the median compositing WM on linux), but it would be good to have that data.
On the other hand, it may be moot if Wayland does end up taking over from X.
(Nitpick: Traditional WMs composite as well, they just do it on the CPU instead of the GPU.)
That's interesting. I remember the developer of KWin (KDE's window manager) saying that he considered disabling GPU compositing when the battery runs low, but he couldn't prove that this actually saves energy. In fact, on some configurations, GPU compositing was less power-intensive than CPU compositing.
No Qt or GTK application sends draw commands to the X server anymore (except for the final "draw window-sized pixmap"). That only applies to xterm or maybe Tcl/Tk stuff.
This still isn't compositing. If I have 10 windows all with the same X/Y coordinates, the only window sending any draw commands at all is the top window with a traditional wm.
Don't forget "race to sleep". If the GPU takes 3x more power but completes 10x faster and can go back into low power mode sooner that could be another power savings.
Those are very general conclusions to draw from one test case. The bouncing ball test runs at 60 FPS for me on macOS; most of the time is spent in painting, as expected. Likewise, Stripe scrolls at 60 FPS for me.
I should note that the bouncing ball test is the kind of thing that WebRender is designed to improve—painting-intensive animations—so it's obviously untrue that there's no interest in improving this sort of workload. When running your bouncing ball test in Servo with master WebRender, I observed CPU usage of no higher than 15% (as well as 60 FPS)…
> With the page at stripe.com, I don't see any difference between FF52ESR, FF56.0.2, FF57.0b14 with servo enabled, or with it disabled. On my 2012 Macbook Air with macOS 10.12, I see about 98-108% CPU on each, according to iStat Menus CPU meter. With Safari it's about 35%. That's exactly what I would expect based on past experience.
> I would probably close this as invalid, as it's not something new or specific to Quantum or Servo, or as a duplicate of one of the older bugs, though I'm not sure which.
That likely points to why there's little movement on this bug. Its title can be interpreted to indicate a Quantum regression, but it's a longstanding general issue, so it may be that the people who are seeing it are not focused on it (they're likely tracking down and fixing actual regressions, not known problems).
I know that doesn't help your issue, but it may help you locate the relevant bug report and lend your weight there, if you feel so inclined.
I am still a little sceptical of WebRender. Not of its theory or implementation, but relying on graphics drivers and the GPU to do the work continues to be a pain. There are lots of laptops with graphics drivers that never get updated.
Stylo is now enabled in Firefox for Android's Nightly builds. You can install Nightly from the Google Play Store to see if Stylo makes a difference for the animation problems.
This blog has a good example of the jank you are reporting. I am observing noticeable jank on the "slideup" animation on #fixedcontent.firstvisit vs. Chrome (62.0.3202.94). (macOS/10.12.6 FF/57.0 late-2013 MBP)
> For example, register allocation is a tedious process that bedeviled assembly programmers, whereas higher-level languages like C++ handle it automatically and get it right every single time.
Ideal register allocation is NP-complete, so a compiler can't get it right every single time.
I'm not sure how good modern compilers are at this in practice, but I would be curious to know if there are asm writers who can actually consistently outperform them.
They're not saying that C++ compilers do the best possible register allocation, they're saying that C++ compilers generate a register allocation that works and doesn't contain bugs. Technically, spilling everything to memory and loading only what you need to issue instructions is "getting it right" by this definition. No compiler strives to get the "optimal" anything in the general case, but we do expect them to strive to be "correct" in all cases. The language we use determines which properties are included in our idea of "correctness".
I found this example interesting, and found myself comparing it to how Rust does memory management, which is certainly not "automatic" in the sense that would describe a garbage-collected language.
Optimal register allocation has been polynomial time for more than 10 years - for some definition of optimal. IIRC it started with programs in SSA form and has dropped that requirement more recently. Modern GCC uses SSA form and I think LLVM might too.
GCC and LLVM do not retain SSA form by the time register allocation happens (they both convert to a non-SSA low-level IR before then).
It's also worth pointing out that "optimal" in theory doesn't necessarily correspond to optimal in practice. The hard problem of register allocation isn't coloring the interference graph; it's deciding how best to spill (or split live ranges, or insert copies, or rematerialize, or ...) the excess registers when there aren't enough physical registers, which is most of the time. Plus, real-world architectures also have issues like requiring specific physical registers for certain instructions and subregister aliasing, which are hard to model.
In practice, the most important criterion tends to be to avoid spilling inside loops. This means that rather simple heuristics are generally sufficient to optimally achieve that criterion, and in those cases, excessive spilling outside the loops isn't really going to show up in performance numbers. Thus heuristics are close enough to optimal that it's not worth the compiler time or maintenance to achieve optimality.
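As a toy illustration of the "correct but heuristic" point above: the sketch below greedily colours an interference graph with k physical registers and spills anything it cannot colour. Real allocators weight spill candidates by loop depth, split live ranges, and handle physical-register constraints, none of which appears here; the code and names are mine, not GCC's or LLVM's.

```rust
use std::collections::{HashMap, HashSet};

/// Toy allocator: greedily colour the interference graph with `k` registers,
/// spilling any virtual register that cannot be coloured. Correct but far
/// from optimal; purely illustrative.
fn allocate(
    interference: &HashMap<u32, HashSet<u32>>,
    k: usize,
) -> (HashMap<u32, usize>, Vec<u32>) {
    let mut assignment: HashMap<u32, usize> = HashMap::new();
    let mut spilled: Vec<u32> = Vec::new();

    // Visit virtual registers in an arbitrary (here: sorted) order; a real
    // allocator would order by spill cost, loop depth, etc.
    let mut vregs: Vec<u32> = interference.keys().copied().collect();
    vregs.sort();

    for v in vregs {
        // Colours already taken by interfering neighbours.
        let taken: HashSet<usize> = interference[&v]
            .iter()
            .filter_map(|n| assignment.get(n).copied())
            .collect();

        // Pick the first free colour, or spill if none is available.
        match (0..k).find(|c| !taken.contains(c)) {
            Some(c) => {
                assignment.insert(v, c);
            }
            None => spilled.push(v),
        }
    }
    (assignment, spilled)
}

fn main() {
    // v0 interferes with v1 and v2; v1 and v2 do not interfere.
    let g = HashMap::from([
        (0, HashSet::from([1, 2])),
        (1, HashSet::from([0])),
        (2, HashSet::from([0])),
    ]);
    let (regs, spills) = allocate(&g, 2);
    println!("assigned: {:?}, spilled: {:?}", regs, spills);
}
```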
Yes, the fun in compilers: Even if every phase and every optimization actually produces optimal results, the combination is probably not optimal.
One deep problem is that there is no good optimization goal. Today's CPUs are too complex and unpredictable performance-wise.
Another problem is: Register pressure is one of the most important things to minimize, but how can the phases before register allocation do that? They use a surrogate, like virtual registers, and thus become heuristics even if they solve their theoretical problem optimally.
> IIRC it started with programs in SSA form and has dropped that requirement more recently.
No, it's only SSA form that has optimal register allocation in polynomial time; otherwise someone would've proved that P=NP (as it's proven to be NP-hard). :)
That said, finding minimal SSA from arbitrary SSA is NP-Hard.
> Modern GCC uses SSA form and I think LLVM might too.
LLVM has always used SSA (this is relatively unsurprising given its origins in research and so much research of the period being on SSA).
I really like the new Fox. I’ve tried switching over completely but I think it’s causing some random BSODs on my Latitude E5570. The laptop does have a second Nvidia graphics card, for which there is no driver installed. (Don’t ask :) I’m perfectly fine with the onboard Intel and I much prefer the extra hours of battery life.)
> The teams behind revolutionary products succeed because they make strategic bets about which things to reinvent, and don’t waste energy rehashing stuff that doesn’t matter.
This is a great write-up that gives me warm fuzzy feelings.
What is also interesting for me to realise, though, is that a lot of this was happening at the same time as Mozilla was largely focused on Firefox OS, and receiving a lot of flak for that.
It's a shame that Firefox OS failed, but it was clear that they had to try many different things to remain relevant, and it's excellent to see that one of those is very likely to pay off. Even though Rust might've been dismissed for the same reasons Firefox OS was.
FF has, for me, crashed more times in the last week than in the previous year - multiple installs on different Linux systems. The last crash was with a clean profile.
And then there's the disappearing dev tools - that's fun.
EDIT:
I hope there is just something weird with my systems. But I fear that the push to get this out might have been a little hasty.
EDIT EDIT
Apart from the crashes the new FF has been nice. I've been able to largely stop using chromium for dev work - so not all is bad.
You can go to "about:crashes" to get some more information about reported crashes. If you open a crash report and click the "Bugzilla" tab, you can find out if a bug is on file for that specific stack trace.
I have been using Firefox for several months, on Windows 10 and several GNU/Linux distributions, with different hardware, etc., and have never experienced a crash.
It's definitely something weird to do with your systems, meaning it's a real bug that you are experiencing, and I am not.
So please share crash reports, and file bug reports. Different hardware/software quirks may reveal bugs in Firefox/Linux/drivers/window managers/anything. By submitting a bug report for Firefox, you may be able to help find a video driver bug, etc.
> For example, register allocation is a tedious process that bedeviled assembly programmers,
Yet more propaganda. I’ve been part of the cracking and demo scene since my early childhood. If you didn’t code in assembler you might as well not have taken part in it at all, because small fast code was everything. None of us ever had an issue with register allocation, nor do we face such issues today. Not 30+ years ago, not now.
> the breadth of the web platform is staggering. It grew organically over almost three decades, has no clear limits in scope, and has lots of tricky observables that thwart attempts to simplify.
It would be great to create the HTML/CSS/JavaScript stack from scratch, or at least make a non-backwards-compatible version that is simpler and can perform better. HTML5 showed us that can work.
Yeah, but Firefox is already struggling while supporting all the possible standards and more ("sorry, our site is better viewed with Google IE4... ehm, Google Chrome").
The whole Mozilla strategy of corroding Firefox piece by piece is actually very professional. Big backwards-incompatible transitions in technology almost always fail.
> sorry, our site is better viewed with Google IE4... ehm, Google Chrome
FWIW this is usually due to folks doing performance work in only one browser or not really testing well and slapping that label on after the fact.
Or stuff like Hangouts and Allo where they use nonstandard features.
The major things Firefox doesn't support that Chrome does are U2F (it does support it now, but flipped off, will flip on soon I think) and web components (support should be available next year I guess; this kinda stalled because of lots of spec churn and Google implementing an earlier incompatible spec early or something.)
The reason the URL parser work is taking so long is not that it's complex; rather, it's that the work has stalled. URL parsing is complex, but all that complexity was already dealt with when the Servo team wrote the rust-url crate ages ago, so it's not a factor here.
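For reference, here is a small example of the rust-url crate's API (usage mine, not taken from the Gecko integration), showing component access and relative-URL resolution per the WHATWG URL standard.

```rust
use url::Url;

fn main() -> Result<(), url::ParseError> {
    // Parse an absolute URL and inspect its components.
    let u = Url::parse("https://example.com:8080/a/b?q=rust#frag")?;
    assert_eq!(u.scheme(), "https");
    assert_eq!(u.host_str(), Some("example.com"));
    assert_eq!(u.port(), Some(8080));
    assert_eq!(u.path(), "/a/b");
    assert_eq!(u.query(), Some("q=rust"));

    // Relative references resolve the same way a browser resolves them.
    let joined = u.join("../c")?;
    assert_eq!(joined.as_str(), "https://example.com:8080/c");
    Ok(())
}
```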
The URL parser integration was a proof of concept. It doesn't really improve stuff (aside from a slight security benefit from using Rust) so there wasn't pressure to land it; it was just a way of trying out the then-new Rust integration infra, and inspiring better Rust integration infra.
One of the folks on the network team started it, and I joined in later. But that person got busy and I started working on Stylo. So that code exists, and it works, but there's still work to be done to enable it, and not much impetus to do this work.
This work is mostly:
- Ferreting out where Gecko and Servo don't match so that we can pass all tests. We've done most of this already, whatever's left is Gecko not matching the spec, and we need to figure out how we want to fix that.
- Performance -- In the integration we currently do some stupid stuff wrt serialization and other things, because it was a proof of concept. This will need to be polished up so we don't regress.
- Telemetry -- before shipping we need to ship it to nightly in parallel with the existing one and figure out how often there's a mismatch with the normal parser
As someone who once tried to write code to do it to avoid pulling in a dependency: never again. It's not just that the spec is 60 pages long, but that the actual behaviour out in the real world is miles away from the spec; the web is a complex place where standards are... rarely standard.
URLs have been a security issue for browsers in the past, and can get pretty hairy. From UTF-8 coded domain names to whatever you want to "urlencode". For example, you can encode whole images into URLs, for embedding them in CSS files.
Old IE versions had a hard URL length limit and were very picky with the characters in domain names, both limitations included as "security fixes" (which broke the standards).
I'd say the change of the encoding stack to encoding-rs is pretty significant; while it's not that much code it's stuff that gets used throughout the codebase.
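For the curious, a quick illustration of what encoding_rs does: decode bytes in a legacy encoding into UTF-8 the way a browser has to, replacing malformed sequences instead of failing. The example is mine.

```rust
use encoding_rs::SHIFT_JIS;

fn main() {
    // "ハロー" encoded as Shift_JIS, the kind of legacy-encoded content a
    // browser still has to handle across the web.
    let bytes = b"\x83n\x83\x8d\x81[";

    // decode() never fails: malformed sequences become U+FFFD instead.
    let (text, _encoding_used, had_errors) = SHIFT_JIS.decode(bytes);
    assert!(!had_errors);
    println!("{}", text); // prints ハロー
}
```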
It's a matter of allocating time to implement the missing parts and get it to work properly. Right now the people who could do this are working on other things but it will get done eventually.
I would find frequent cases where my system would stall for 10-20s (couldn't toggle caps lock, pointer stopped moving). I almost always have just Chrome and gnome-terminal open (Ubuntu 16.04). I had attributed it to either a hardware or BIOS/firmware defect.
Now, after switching to Firefox I have gone a week without seeing those stalls.
YMMV -- I never bothered to investigate, it could be something as simple as slightly-less-memory-consumption from FF, or still a hardware defect that FF doesn't happen to trigger.
This sounds vaguely like what I've been experiencing on recent Chrome versions. On Windows I've had Chrome randomly hang... initially on the network, then after a few seconds even the UI freezes. When that happens, if I launch Firefox and try to access the network, it hangs too. But programs that don't try to access the network don't hang. Then after maybe (very roughly) 30 seconds, it all goes back to normal. No idea what's been going on, but it seems like you might be experiencing the same thing, and it seems like a recent bug in Chrome rather than a firmware issue... I'm just confused at how it affects other programs. It didn't use to be like this just a few weeks ago.
I am most definitely not running out of memory or having other programs active. I easily have like > 10 GB free RAM and it happens when nothing else is open.
Like I was suggesting earlier -- my habits haven't changed. It started doing this quite recently. It wasn't like this a few weeks ago.
Just a shot in the dark, but do you have an nVidia GPU? Some drivers caused hangs with Chrome when GPU acceleration/rasterization was enabled in the browser settings.
Installed the new Firefox; had one tab running for a few days, and it had allocated more than 10 GB of virtual memory. I had high hopes, but I'm sticking with Chrome.
No, virtual memory doesn't necessarily correspond to actual memory use. It could be an indicator, though. RSS would be more useful, IMO.
That said, even if the poster's number is accurate, the browser isn't necessarily doing anything wrong. AFAICT, nothing stops JS on a page from allocating that much memory and "leaking" it (e.g., holding on to the JS objects, maybe accidentally in a giant list, and never using them). It isn't the browser's fault if JS is actually "using" that much RAM.
[1] https://www.joelonsoftware.com/2000/04/06/things-you-should-...
[2] https://www.joelonsoftware.com/2000/11/20/netscape-goes-bonk...