Hacker News

Maybe they just don't want to port their backend code again and again to every LLVM snapshot they take, because that can be a tedious thing to do compared to just having it in origin/master forever for free.


Without the ability to test, though, they will still have to do a bunch of work in order to be able to use a new release. Possibly the release breaks their backend, but nobody will know until Google tries to use it.


If their contribution includes unit tests, that could be enough to guarantee that most changes won't break it.


What makes that tedious? And can't it be fixed?


Suppose the LLVM devs change a function signature used by Google's private backend. If the backend is private, Google will have to update their usage of that API when they do a merge. But if the backend is present upstream when the LLVM devs make that change, then the LLVM devs are responsible for making sure all supported backends update their usage of the API.


But as people have noted, if you don't have the hardware, it's impossible to test that any changes actually work.

If we're just talking about pure refactoring that doesn't change any output, you could test that the generated machine code is identical. But then you have to ask, why isn't there a stable API rather than all this refactoring churn?

I guess this is just the way LLVM and Clang are designed -- all components really tightly coupled together. And it's a successful project so it must be working out for them. But...!


"If we're just talking about pure refactoring that doesn't change any output, you could test that the generated machine code is identical. But then you have to ask, why isn't there a stable API rather than all this refactoring churn?"

LLVM deliberately does not want a stable API. It wants people to keep up with trunk.

They do this because they saw what has happened with other compilers, where the stable API became literally impossible to change over time.

This is one of the reasons GCC still has a crappy backend: you either have to build a new API and port everyone over, or you have to find an incremental way to change an interface with hundreds of random interface points.


That is definitely not true about needing the hardware. There are loads of compilers for fictitious hardware, and they work perfectly well. There was an entire x86-64 toolchain before anyone ever manufactured one of those.


So I'd venture to presume that there's an extensive battery of tests for each target processor. It seems unlikely to me that developers on the LLVM/Clang team all have physical access to every target CPU.
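That is in fact how LLVM's codegen tests work: they never execute the generated code, they just check the assembly text that llc emits against patterns with FileCheck. A hedged sketch of such a regression test (details from memory, the exact triple and check lines are illustrative):

```llvm
; RUN: llc -mtriple=x86_64-unknown-linux < %s | FileCheck %s

define i32 @add(i32 %a, i32 %b) {
; CHECK-LABEL: add:
; CHECK: addl
  %sum = add i32 %a, %b
  ret i32 %sum
}
```

Since only the emitted text is checked, no physical target CPU is ever needed to run the suite.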


I'm just explaining why Google might want it merged upstream. The work you're talking about has to be performed in either case.


In this case you would just need to compile to know it's broken.



