You may be underestimating the degree of difference in performance between Ruby and Rust.

Here's a comparison of Ruby with JS, and Rust is of course faster still: https://benchmarksgame-team.pages.debian.net/benchmarksgame/...

If the code runs 100 times faster, that might just offset even a highly inefficient implementation.

> a LOT of programmers are happy writing a n^3

I have the same experience.

Unfortunately, and this is an issue I keep running into in some .NET communities, languages like C, C++ and Rust tend to select for engineers who are more likely to care about writing a reasonably efficient implementation.

At the same time, higher-level languages can almost encourage blindness to the real-world model of computation, execution implications be damned. In such languages you will encounter far more people who will write an O(n^3) algorithm and fight you tooth and nail to keep it that way, because they have zero understanding of the fundamentals, wasting the heroic effort the runtime/compiler makes to keep it running acceptably well.



> At the same time, higher-level languages can almost encourage blindness to the real-world model of computation, execution implications be damned. In such languages you will encounter far more people who will write an O(n^3) algorithm and fight you tooth and nail to keep it that way, because they have zero understanding of the fundamentals, wasting the heroic effort the runtime/compiler makes to keep it running acceptably well.

I would say this tracks. I spent some time doing research on JVMs and found that, for example, the Java community largely values building OO abstractions around program logic and structuring things in ways that generally require more runtime logic and safety checks. For example, Java generics are erased and replaced with casts in the bytecode. The JVM has to blindly perform those checks in the interpreter and in any lower compiler tiers that don't inline; only in the optimizing tiers does the compiler inline enough to see the context needed to statically eliminate them.
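To make that concrete, here's a minimal sketch (hypothetical example) of where such a cast hides: the source contains no cast at all, but because the generic type is erased, javac emits a checkcast at the call site and the JVM has to execute it until an optimizing tier can prove it redundant.

    import java.util.List;

    class Erasure {
        static String first(List<String> names) {
            // After erasure, List.get returns Object; javac inserts a
            // checkcast to String here even though the source has no cast.
            return names.get(0);
        }
    }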

Of course Java hides these checks because they should never fail, so it's easy to forget they are there. As API designers and budding library writers, Java programmers learn to use these abstractions, like the nicety of generics, to make things more general and usable. That's the higher priority, and when the decision comes down to performance versus reuse, programmers choose reuse all the time.


> that generally require more runtime logic and safety checks.

These safety checks and this runtime logic are a constant factor in the performance of a given Java application.

Further, they are mostly minuscule compared to the other things you are paying for by using Java. The class check requires loading the object from main memory/CPU cache, but the actual check is a single-cycle cmp. Considering that the object will be immediately used by the following code (hence warm in cache), the price really isn't comparable to the already existing overhead of reaching down into RAM to fetch it.

I won't say there aren't algorithms that will suffer; if you are doing really heavy data crunching, that extra check can be murder. However, in the very grand scheme of things, it's nothing compared to all the memory loading that goes on in a typical Java application.

That is to say, the extra class cast on an `ArrayList<Point>` is nothing compared to the cost of the memory lookups when you do

    int sum = 0;
    for (var point : points) {
      sum += point.x + point.y + point.z;
    }


> The class check requires loading the object from main memory/CPU cache, but the actual check is a single-cycle cmp.

Only for a guard or, possibly, a final-class type check (at least that's the case for sealed classes or exact type comparisons in .NET). For anything else it will be more involved because of inheritance.

Obviously for any length above ~3 this won't dominate, but the JVM type-system defaults don't make any of this easier.


I'm not an expert, but I think the compiler requires the exact class on insertion, so at the point of use it's just a check.


> If the code runs 100 times faster, that might just offset even a highly inefficient implementation.

That's the danger of algorithmic complexity. 100 is a constant factor. As n grows, the effect of that constant factor is overwhelmed by the algorithmic inefficiency. For something like n^3, it really doesn't take long before the algorithm dominates performance over any language consideration.

To put it in perspective, if the Rust O(n^3) algorithm is 100x faster at n=10 than the Ruby O(n) algorithm, it only takes until around n=100 before the Ruby version ends up faster than the Rust one.
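A minimal sketch of that arithmetic, assuming a purely hypothetical cost model normalised so the Ruby O(n) version takes 1.0 time unit at n=10 and the Rust O(n^3) version takes 0.01 (i.e. is 100x faster there):

    public class Crossover {
        public static void main(String[] args) {
            for (int n : new int[] {10, 50, 100, 200, 1000}) {
                double ruby = n / 10.0;                      // O(n): 1.0 at n = 10
                double rust = 0.01 * Math.pow(n / 10.0, 3);  // O(n^3): 0.01 at n = 10
                System.out.printf("n=%4d  ruby=%8.2f  rust=%10.2f%n", n, ruby, rust);
            }
        }
    }

Under that model the two curves meet at n=100; past that, the constant factor stops mattering.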

For the most part, a language's runtime overhead is a roughly constant factor. That's why algorithmic complexity ends up being extremely important, more so than the language choice.

I used to not think this way, but the more I've dealt with performance tuning, the more I've come to appreciate the wisdom of Big O in day-to-day programming. Too many devs will justify an O(n^2) algorithm as being "simple" even though the O(n) version is often just a matter of adding a hashtable to the mix.
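A typical (hypothetical) example of what that looks like: a duplicate check written as a nested loop versus the same check with a HashSet thrown in.

    import java.util.*;

    class DupCheck {
        // O(n^2): compare every pair
        static boolean hasDuplicateQuadratic(List<String> items) {
            for (int i = 0; i < items.size(); i++)
                for (int j = i + 1; j < items.size(); j++)
                    if (items.get(i).equals(items.get(j))) return true;
            return false;
        }

        // O(n): one pass plus a HashSet
        static boolean hasDuplicateLinear(List<String> items) {
            Set<String> seen = new HashSet<>();
            for (String item : items)
                if (!seen.add(item)) return true;  // add() returns false if already present
            return false;
        }
    }

The linear version is barely more code, and for large inputs the difference dwarfs any language-level constant factor.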


I've found this website provides different results: https://programming-language-benchmarks.vercel.app/typescrip...

It also shows different Ruby implementations. I've tried truffleruby myself and it's blazing fast on long-running CPU-intensive tasks.


The tests on this website run for very little time indeed. They use input values that, for example, the original Benchmarks Game suggests only for validation before running for longer to measure actual performance (another case in point: surely you want to run a web server for longer than a couple hundred milliseconds). In my experience the data there does not always match what you get in real-world scenarios. It's an unfortunate tradeoff: when you want to support so many languages, the benchmark runs take a very long time, but in my opinion it's better to have numbers that are useful for making informed decisions than pure quantity.

If you have something specific in mind, it can be more interesting to build and measure the exact scenario you'd like to know about (standard caveats about benchmarking properly apply), which is much easier if you have, say, just two languages.
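If it helps, a bare-bones sketch of the kind of measurement meant here (the workload is a hypothetical stand-in; a proper harness such as JMH handles warmup, dead-code elimination and statistics for you):

    class MiniBench {
        public static void main(String[] args) {
            long result = 0;
            // Warm up so the JIT has a chance to compile the hot path first.
            for (int i = 0; i < 5; i++) result += workload();
            int runs = 20;
            long start = System.nanoTime();
            for (int i = 0; i < runs; i++) result += workload();
            long elapsed = System.nanoTime() - start;
            System.out.println("avg ms/run: " + (elapsed / runs) / 1_000_000.0 + " (" + result + ")");
        }

        // Hypothetical stand-in for the scenario you actually care about.
        static long workload() {
            long sum = 0;
            for (int i = 0; i < 10_000_000; i++) sum += i % 7;
            return sum;
        }
    }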



