What is the problem?

On macOS they can keep using the bundled linker.


Note that disabling debug info means way, way harder debugging!


As the sibling comment says, you can always turn it back on when you want to do debugging. Personally, I mainly do printf debugging so I don't miss it much. Also there is the option of only turning it off for dependencies like:

    [profile.dev.package."*"]
    debug = false
I haven't tested how big the impact of that is, but it should be pretty large for projects with a large number of dependencies.

Edit: I've tested it now on cargo-udeps [1], which has a large crate graph but little code of its own. There is only a small difference between turning off debuginfo for all crates vs. only for dependencies, but a sizeable improvement over compiling with debuginfo for dependencies.

[1]: https://gist.github.com/est31/1bb6e7d6f4be2a23701b8c7c1c678b...


Oh sure, usually I want to build/run it way more often than debug it though. And if I do need to debug it, I could just turn it back on and rebuild.
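
For reference, a minimal sketch of the toggle being described, assuming debug info was disabled via the dev profile in Cargo.toml (full debug info is already the default for dev builds, so deleting the earlier override works just as well):

    [profile.dev]
    # restore full debug info; `true` (or 2) is the dev-profile default,
    # so removing an earlier `debug = false` override has the same effect
    debug = true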


> Rust compiles orders of magnitude faster than similar C++ code for initial compiles

Please avoid misleading statements like "orders of magnitude faster". If that were true, even super-complex projects would compile in a few seconds, given it would be at least 100x faster than compiling C++.

> For incremental compiles, Rust is infinitely faster.

Please avoid obvious exaggerations...

Incremental compile times in C++ scale with how modular the code is. Same for Rust. If you make huge TUs, you get huge incremental compile times.

> C++, even if you use Modules

Modules are not yet used by any big project nor properly supported, and they are not expected to speed up compilation much anyway.

> Comparing Rust with C here is hard.

It isn't hard: C can be compiled way faster than either C++ or Rust because the code and type system is much simpler.

> As others have mentioned, with C, you typically only use libraries that are installed in your system

C code can be anything. C++ libraries are also included with the system.

> Rust does not have a global system cache for libraries

It does have a cache per user/project/etc., which is basically the same thing, and without it the speed would be atrocious.


> Please avoid misleading statements like "orders of magnitude faster". If that were true, even super-complex projects would compile in a few seconds, given it would be at least 100x faster than compiling C++.

The tests of my Rust range-v3 like library compile in ~12 seconds. The tests of C++'s range-v3 take ~320 seconds to compile.

> Please avoid obvious exaggerations...

The incremental compile of the same library takes ~1 second in Rust. In C++, it still takes ~320 seconds: zero improvement, because C++ header-only generic libraries do not benefit from incremental compilation.

> Modules are not even used yet by any big project nor properly supported yet, and they are not expected to speed up compilation much anyway.

I maintain a ~500kLOC C++ project that uses modules. C++ Modules did not speed up compilation at all over simple PCHs.

> It does have a cache per user/project/etc., which is basically the same and without it the speed would be atrocious.

No it does not. Check out any project from crates.io into a subdirectory, do a cargo build. Check it out into another subdirectory, do a cargo build. All dependencies are re-built.

You'd need to go out of your way and set up something like sccache to get something similar, but Rust and cargo do not do this by default.
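
For what it's worth, a minimal sketch of that setup, assuming sccache is installed and on PATH:

    # route rustc invocations through sccache for this shell session
    export RUSTC_WRAPPER=sccache

    # or persistently, via ~/.cargo/config.toml
    [build]
    rustc-wrapper = "sccache"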

> It isn't hard: C can be compiled way faster than either C++ or Rust because the code and type system is much simpler.

Citation needed. On my machine, rustc spends ~80% of its time in LLVM, which is slow because the translation units are huge. The other ~20% is split into ~5% parsing, ~5% type checking, and ~10% LLVM-IR generation.

Even rustc's synthetic trait metaprogramming benchmarks do not spend even 10% of the compile-time in type checking.
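
If you want to see where your own project's compile time goes, a rough way to measure it (nightly-only flags, and the output format varies a bit by toolchain version):

    # per-pass timing summary printed at the end of the build
    cargo +nightly rustc -- -Z time-passes

    # or the more detailed profiler, which writes a trace you can inspect offline
    cargo +nightly rustc -- -Z self-profile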

So if you have a test for which rustc spends most of its compile-time in type checking, go ahead and post a link. We'd love to add it to rustc's benchmark suite.

Also, on my machine, while clang spends less time in code generation, it is still over ~60%.

Clang spends more time in parsing and type checking, and that's because C++ overload resolution scales with the number of visible overloads, and the more header files you have, the more overloads there are (e.g. an error with operator<< on a ~500kLOC project produces ~2000 "potential candidates"). While that's part of the slowdown, 60% of the time is still spent in LLVM. The reason is that idiomatic C++ just produces more code than C due to templates, inlining, etc. for similarly-sized TUs.

Parallelizing compilation within a TU is hard to do well. Rustc tries without much success, and clang does not. In C, you can simply parallelize by using more compiler processes to compile fully independent TUs in parallel. These TUs are also smaller, because idiomatic C code doesn't use many macros to generate code, most functions are linked opaquely, generics generate little code via void*, and the worst culprit of long compilation times in C, including too many header files, can properly be cached using PCHs.
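
A rough sketch of that C workflow, assuming gcc and make, where common.h stands in for whatever header dominates the include graph:

    # precompile the heavy header once; gcc emits common.h.gch and picks it
    # up automatically wherever common.h is included (flags must match)
    gcc -O2 common.h

    # each .c file is an independent TU, so they all compile in parallel
    make -j"$(nproc)"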


This type of comment is the worst. What did you add to the conversation besides a bunch of useless "well ackshually"? If you want to respond to a comment, respond to it in full, not just to conveniently out-of-context sentences.


The comment makes good points that need to be considered and helps mitigate the unrealistic exaggerations of the parent comment.

This does not fit the “well akshually” trope at all.


> Stars tell how cool a open source project is

Strange measure of coolness.


Git can be used locally!
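
A minimal sketch of a purely local workflow, with no remote or hosting service involved (myproject and notes.txt are just placeholder names):

    git init myproject && cd myproject
    echo "hello" > notes.txt
    git add notes.txt
    git commit -m "initial commit"   # works entirely offline, no remote needed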


> Wifi doesn't seem to have this latency problem.

Using Wifi is one of the best ways to add unreliability and latency.


It depends on what you want to do. It is not about picking one or the other.

For the majority of software (I guess you are referring to web apps), simpler/GC/scripting languages like TypeScript or Go are better.

But, if you really need something low-level, then you need C, C++, Rust, Zig, etc. and there is no way around it.


+1

I don't understand people trying to make "a language that does it all". There has been a lot of that in Go, in Rust, in JavaScript, in Java, in TypeScript, in C++, in Haskell, in D...


I guess browser support is not good enough. They just dropped IE11, after all.

Probably that will be the change for Bootstrap 6.


React is meant to simplify application development and is especially suited for single-page apps (SPAs).

For a normal/small website, there is no need for a framework like that; it adds unneeded complexity.


I don't know. Being neither a web developer nor a frontend specialist (my professional development days are long behind me and were mostly spent on real time systems in Ada with some UI work using Qt), I recently developed a small site using react and redux. I chose the stack for fun but I found the experience a lot nicer than all my previous forays into web development. It felt a lot cleaner than what I was expecting. I think the stack pushes you in a clear direction regarding architecture and to me it seemed to make things simpler rather than more complicated.


That’s exactly why many of us use it, without giving much thought to the idea that it’s heavy or slow (which really makes me wonder what the hell some people are doing with their React apps...)

The developer ergonomics, and the mental models which it prescribes - which are why I’d argue it’s actually a framework and not just a library, but that’s semantics - are really well thought out and can make you very productive.

If I want to build a static site with minimal fuss I might still use something like HTML, Sass and vanilla JS with Parcel. For pretty much everything else React has become a default.


React is also well-suited for a single highly-interactive module on an otherwise non-React website. Of course, you still need to consider JS code size and loading performance, but it would often be completely reasonable to use React for, say, a very complex interactive form or table of data on your otherwise static Rails website.


Something like Preact can be pretty good for this too. The one thing that makes React (by itself, not with all the stupid extras) good for this stuff (as opposed to Vue and Angular) is that it has a very simple API without much surface area, so it's relatively easy to maintain, just like jQuery. But where it still falls a bit short is the actual size of the library itself, it's a bit wasteful to package in React just for a widget or something.


I don't agree at all. React is just as much a framework as a "way of thinking". I use react for _everything_ - because it's a great way to whip out a website - large or small. The "complexity" added is non-existent, unless it's because you are not willing to learn the way to do things. That's not the paradigm's fault, that's just people being complacent and lazy - sticking with "what they know" (which is fine, it's human nature).

Throwing together an API in .NET Core and a website consuming that API in React takes less than 5 minutes to get going.


Did you ever have to optimize for SEA/SEO/Page speed?

