pornel's comments | Hacker News

The author has one major issue with Rust in gamedev: they can't easily try out new game feature ideas or gameplay tweaks by writing quick and dirty code.

Such unfinished code can be obviously buggy and unsafe, but in game dev it matters more to have a short feedback loop to try how things feel than to have perfectly correct code all the time.

Rust doesn't do quick and dirty, and will complain about code quality even when game devs don't plan to keep the code.

This is a substantially different situation from other domains like application and services development, where it's easier to plan what you're going to implement, correctness is more important, and you don't need to try out 20 different JSON parsers to see which one has the most satisfying feel.


C++ doesn't do quick and dirty either. That's why experienced game developers combine a core engine written in a high-performance language (C++, Rust, C#), with a scripting language (Lua, Python, hand-rolled).

That approach gives you the best of both worlds: high-performance core with high-velocity iterations for gameplay. Don't use Rust or C++ for scripting... madness lies that way.
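
As a rough sketch of that split, assuming the mlua crate for embedding (the `spawn` function and the script below are made up for the example):

    use mlua::prelude::*;

    fn main() -> LuaResult<()> {
        let lua = Lua::new();

        // Expose an engine-side function to scripts.
        let spawn = lua.create_function(|_, name: String| {
            println!("engine: spawning {name}");
            Ok(())
        })?;
        lua.globals().set("spawn", spawn)?;

        // Gameplay code lives in Lua, so it can be tweaked (or hot-reloaded)
        // without recompiling the Rust core.
        lua.load(r#"spawn("goblin")"#).exec()?;
        Ok(())
    }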


C++ will easily let you write non-thread-safe code anywhere, won't complain about multiple mutable pointers to the same object, will let you leave pointers dangling and resources leaked.

Such code is of course crappy, but the OP wants to test whether things are fun before committing to implementing them properly. Rust wants things done properly on the first try (which is usually a good thing, except for throw-away prototypes).

https://loglog.games/blog/leaving-rust-gamedev/
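
As a tiny illustration (mine, not from the post) of the kind of code C++ accepts silently but Rust rejects even in a throw-away prototype:

    fn main() {
        let mut scores = vec![1, 2, 3];
        let first = &mut scores[0];
        // Uncommenting the next line is exactly the kind of thing C++ allows
        // without complaint (and sometimes crashes on later), but Rust refuses
        // to compile it, prototype or not:
        // scores.push(4); // error[E0499]: cannot borrow `scores` as mutable more than once
        *first += 1;
        println!("{scores:?}");
    }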


"C++ doesn't do quick and dirty either."

You can very easily do quick and dirty in C++. (And shoot your foot off too... but that's a different thing.)


It also introduces a lot of complexity where the two languages interact.


Except hot code reloading and interpreters are something that C++ devs can reach for, and Rust ones can't, at least for the foreseeable future.


What do you mean by this? Rust has multiple game engines that support hot reloading and/or scripting.

https://fyrox-book.github.io/beginning/hot_reloading.html

https://github.com/jarkonik/bevy_scriptum


This is the bare minimum: https://liveplusplus.tech/

Also, "supports" is kind of relative:

"CHR is very new and experimental feature of the engine, it is based on wildly unsafe functionality which could result in memory corruption, subtle bugs, etc."


Why is that the bare minimum?

I also think that doc page is a little old. The feature is over 2 years old now and hot reloading is inherently "unsafe" in any compiled language. It's just letting you know that Rust's safety guarantees might go out the window if there are bugs in the code that handles hot-reloading.


Because that is what existing game studios expect, as shown by their customer list.

Also, that was only one example; there are other similar ones.


Funnily enough, Mojang is on that list, and I remember recently watching a video[1] of Notch hot-reloading code while developing Minecraft, except it was with Java and I'm sure he didn't pay a dime for the ability to do that.

[1] - https://youtu.be/BES9EKK4Aw4


That live reload probably already works with Rust — they say they don't parse any source code, and instead analyze the compiled binary and debug info.


> Rust doesn't do quick and dirty, and will complain about code quality even when game devs don't even plan to keep the code.

The problem might just be an aversion to using quick and dirty solutions. Rust does a good job of making it feel wrong to write code that's not production ready, but there's nothing stopping you from using unsafe wherever you want.


Disclaimer: never actually shipped a game.

I've worked with Bevy, and there it's incredibly easy to write "quick and dirty" code to test stuff. I guess the major "downside" you need to account for is that the type system will try to protect you from crashes. Which can be a bit of a chore, since you know it won't crash with the values you've given it. But if you're comfortable with .unwrap and the occasional unsafe while prototyping, it's honestly fine (at least within Bevy).
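
For instance, a prototype-grade system might look something like this (a sketch assuming a recent Bevy release; the `Health` component is made up for the example):

    use bevy::prelude::*;

    #[derive(Component)]
    struct Health(i32);

    fn main() {
        App::new()
            .add_plugins(DefaultPlugins)
            .add_systems(Startup, spawn_player)
            .add_systems(Update, poke_player)
            .run();
    }

    fn spawn_player(mut commands: Commands) {
        commands.spawn(Health(100));
    }

    fn poke_player(mut query: Query<&mut Health>) {
        // Quick and dirty: grab the first matching entity and assume it exists.
        let mut health = query.iter_mut().next().unwrap();
        health.0 -= 1;
    }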

Alternatively, they could try "scripting" behaviour first before implementing it in Rust, although from what I understand Bevy's scripting support (I don't think it's explicitly supported, but Bevy is very extensible) is still very early in development.


ECS is like a whole other language on top of Rust, and it is dynamically typed.

If you add a scripting language and blueprints, you're shortening the feedback cycles by using Rust less.

I like Bevy, but it seems like Rust currently is much better suited for making game engines than the games themselves. Some "RustScript" is needed.


> ECS is like a whole another language on top of Rust, and it is dynamically typed.

Can you elaborate? Or did you mean dynamically dispatched?


1. You dynamically add components to any entity, so entities don't have a static type. Semantically they're comparable to a JavaScript Object where you can add and remove properties at will, rather than being structs or class instances with a closed set of fields.

2. Bevy's entity references are generational IDs, without static lifetime guarantees (and restrictions) of Rust's references.
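
To make point 1 concrete in Bevy terms (the component names here are made up; the insert/remove calls are the usual Commands API in recent Bevy versions):

    use bevy::prelude::*;

    #[derive(Component)]
    struct Burning;

    #[derive(Component)]
    struct Frozen;

    // The same entity changes "shape" at runtime: components are attached and
    // detached at will, and no static type describes which combination an
    // entity currently carries.
    fn thaw(mut commands: Commands, query: Query<Entity, With<Burning>>) {
        for entity in &query {
            commands.entity(entity).remove::<Burning>().insert(Frozen);
        }
    }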


> so entities don't have a static type.

They do, they're `Entity`, always.

> where you can add and remove properties at will

I don't think this is true for either `Component` or `Entity` in Bevy. You set what fields a `Component` has when writing the code, and that remains so at runtime. There is no dynamic adding/removing of fields at runtime, at least not yet.

What you do tend to do in Bevy is add/remove `Component`s to/from `Entity`s at runtime; is that maybe where the confusion comes in?


> They do, they're `Entity`, always

Yes, technically, but in how they're used - no. They're a dynamic type.

> What you do tend to do in Bevy is adding/removing `Component`s to `Entity`s at runtime

Yes, this is dynamic typing. Or at least one way to look at it - and it's what allows games to be iterated on quickly.


> Yes, technically, but in how they're used - no. They're a dynamic type.

No, they're really not. Are you talking about Archetypes or something else? Because an Entity is just an ID + a generation, that's it. Nothing more and nothing less. You don't define them as more/less, nor do you use them as more/less either.

> Yes, this is dynamic typing. Or, at least, one way to look at it - and what allows games to be iterated on quick.

I guess a `Vec<u8>` is also then dynamic typing in your mind as you can add/remove elements to the vector? Doesn't really compute for me personally, but whatever floats your boat.


I think they used it more as an analogy: components fill a similar role as properties do in JavaScript.


But that's not really true either, is it? You don't query properties in JavaScript; at best you query for Objects of a certain shape/name. You don't encapsulate data in properties so one object has many different behaviors based on those; you either split the Object into many different ones and have one parent, or you mix the behavior into the same Object.


Yeah, I agree it's not right. Although it's a decent early mental model for a JS dev learning ECS to adopt until better intuition develops!


Something wrong with Lua? ;)


H/2 doesn't solve head-of-line blocking at the TCP level, but it solved another kind of blocking at the protocol level by adding multiplexing.

H/1 pipelining was unusable, so H/1 had to wait for a response before sending the next request, which added a ton of latency, and made server-side processing serial and latency-sensitive. The solution to this was to open a dozen separate H/1 connections, but that multiplied setup cost, and made congestion control worse across many connections.
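
As a rough client-side sketch of what multiplexing buys you (my own illustration, assuming the reqwest, tokio and futures crates; the URLs are placeholders): against an HTTP/2 server all of these requests can be in flight on one connection at once, whereas plain HTTP/1.1 would either serialize them or open extra connections.

    use std::error::Error;

    #[tokio::main]
    async fn main() -> Result<(), Box<dyn Error>> {
        // One client; with an HTTP/2-capable server the requests below are
        // multiplexed over a single TCP connection instead of queuing behind
        // each other.
        let client = reqwest::Client::new();

        let urls = ["https://example.com/a", "https://example.com/b", "https://example.com/c"];
        let requests = urls.iter().map(|u| client.get(*u).send());

        // All requests go out without waiting for earlier responses.
        for response in futures::future::join_all(requests).await {
            println!("status: {}", response?.status());
        }
        Ok(())
    }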


> it solved another kind of blocking on the protocol level

Indeed! And it works well on low-latency, low-packet-loss networks. On high-packet-loss networks it performs worse than HTTP/1.1. Moreover, it gets increasingly worse the larger the page the request is serving.

We pointed this out at the time, but were told that we didn't understand the web.

> H/1 pipelining was unusable,

Yup, but think how easy it would be to create HTTP/1.2 with a better spec for pipelining. (But then why not make changes to other bits as well, and soon we get HTTP/2!) Of course, pipelining only really works on a low-packet-loss network, because otherwise you get head-of-line blocking.

> open a dozen separate H/1 connections, but that multiplied setup cost

Indeed, that SSL upgrade is a pain in the arse. But connections are cheap to keep open. So with persistent connections and pooling it's possible to really nail down the latency.

Personally, I think the biggest problem with HTTP is that it's a file access protocol, a state interchange protocol and an authentication system. I would tentatively suggest that we adopt websockets to do state (with some extra features like optional schema sharing {yes, I know that's a bit of an anathema}), make HTTP/4 a proper file sharing protocol, and have a third system for authentication token generation, sharing and validation.

However the real world says that'll never work. So connection pooling over TCP with quick start TLS would be my way forward.


> Personally, I think the biggest problem with HTTP is that it's a file access protocol, a state interchange protocol and an authentication system.

HTTP is a state interchange protocol. It's not any of the other things you mention.


Ok, if you want to be pedantic:

"HTTP is being used as a file access, state interchange and authentication transport system"

Ideally we would split them out into a dedicated file access protocol, a generic state pipe (i.e. websockets), and some sort of well documented, easy to understand and implement, secure authentication mechanism (how hard can that be!?)

But to your point: HTTP was always meant to be stateless. You issue a GET request to find an object at a URI. That object was envisaged to be a file (at least in HTTP 1.0 days). Only with the rise of CGI-bin in the mid-90s did that meaningfully change.

However I'm willing to bet that most of the traffic over HTTP is still files. Hence the assertion.


What?

HTTP is just a protocol. Stateful or stateless is orthogonal. HTTP is both and neither.

Also, HTTP has no concept of files (in general), only resources. Files can be resources! Resources are not files.


Why would chargers break compatibility? The protocols are open standards, and there's a huge install base for them in both cars and dispensers. Even Tesla's NACS adopted the ISO 15118 protocol, the same as in CCS cars in the US and Europe, which allows use of dumb adapters to keep backwards compatibility.


Sorry, I should clarify that I'm not worried they actually will, just that charging seems like the most vulnerable point in terms of relying on outside software.


Modern gas cars have all the same touchscreens, with the same software, and the same apps talking to the same servers.

EVs are equated with being computers on wheels, but that's just because barely any BEVs existed in the pre-software era, so there aren't many people who vow to never upgrade from their 1976 Sebring CitiCar.


Whenever WebP gives you file size savings bigger than 15%-20% compared to a JPEG, the savings are coming from quality degradation, not from improved compression. If you compress and optimize JPEG well, it shouldn't be far behind WebP.

You can always reduce the file size of a JPEG by making a WebP that looks almost the same, but you can also do that by recompressing a JPEG to a JPEG that looks almost the same. That's just a property of all lossy codecs, and of the fact that file size grows exponentially with quality, so people are always surprised by how even tiny, almost invisible quality degradation can change file sizes substantially.


Productivity tanks only temporarily.

I know that writing your first 10 lines of Rust code will give you 15 novel compile errors, and that feels like it's impossible to get anything done in Rust, but once you get over the hump, you get to use a very productive modern language that can be as low-level as C, and almost as high-level as Python, at the same time.

You will not regret leaving behind the muscle memory for all the preprocessor tricks, workarounds for gratuitous platform differences and 40-year-old footguns, all the trivia for old compilers, the never ending tweaking of snowflake build scripts, and the skill of writing yet another half-assed hash table with your eyes closed. You'll wonder how you ever got anything done in a language that kept you busy calculating malloc sizes by hand.


Rust superficially looks like C++ to avoid looking weird to existing C/C++ programmers, but semantically it's quite far from C++. Rust's generics are not like C++ templates: they're purely type-based, don't use syntax-based matching, and don't have tricks like SFINAE. ODR is guaranteed by construction. Trait lookup is simpler: there's no overloading, no inheritance, no implicit conversions, and interaction with namespaces is simple (ambiguity is an error).

A phantom type looks alien if you haven't used one, but for what it does, it's actually pretty simple. It's there to explain in Rust's terms what an opaque type or a foreign C/C++ type does. You just need to give Rust an example of an equivalent type, and you get correct as-if behavior, and you don't even need to know that you've just configured the type's variance and destruction-order checking.
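
A rough sketch of that pattern (my own example, not tied to any particular library): wrap an opaque C type behind a raw pointer, and let PhantomData tell the compiler how the wrapper should behave for lifetimes and variance.

    #![allow(dead_code)]
    use std::marker::PhantomData;

    // Stand-in for an opaque type defined by some C library.
    #[repr(C)]
    struct RawHandle {
        _private: [u8; 0],
    }

    // A borrowed view into the C object. The PhantomData field is zero-sized,
    // but it makes the compiler treat this struct "as if" it held a
    // &'a mut RawHandle, which yields the right lifetime and variance rules.
    struct Handle<'a> {
        raw: *mut RawHandle,
        _marker: PhantomData<&'a mut RawHandle>,
    }

    fn main() {}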


Rust's generics might wind up adopting constexpr-like abilities, and keyword generics are also on the docket.


The TIOBE index is a horoscope for programmers — it's useless junk, and you can only use it to reinforce your pre-existing beliefs.

TIOBE showed that between 2016 and 2017 the C language dropped by 50%, and then in 2018 it miraculously doubled in usage.

What's more likely: that the slowest-moving, most conservative language just collapsed in one year and later had a renaissance, or that TIOBE's data is so shit that it has a +/- 50% error margin? They're displaying percentages to two decimal places, but don't have a single digit of precision.

According to TIOBE, Visual FoxPro is 2.5 times more popular than TypeScript. In case you haven't heard of Visual FoxPro, it's a niche language for a database product that was discontinued 16 years ago.

What's more likely: that Visual FoxPro still has a massive userbase and almost nobody has heard of TypeScript, or that TIOBE's methodology is so stupid that they can't even tell dead niche languages apart from the most popular ones?

If you must use some ranking, use RedMonk, which at least tries to have accurate data: https://redmonk.com/sogrady/2024/03/08/language-rankings-1-2...


Cloudflare serves ~20% of the Web traffic, and several critical components on the request path are written in Rust (proxies, caches, firewalls, compression, some encryption, etc.)

At Cloudflare Rust is de-facto the default language for new network infrastructure projects. It is stable and mature. Teams using Rust are productive, and deliver fast software. Rust is here to stay, and there's no going back.

Cloudflare is hiring for positions that require Rust, but doesn't require "Rust programmers" specifically — other knowledge/experience is more important. Rust's safe/unsafe boundary and a lot of hand-holding by the compiler allow hiring people who are new to Rust. This is working fine, and the ability to work on large real-world Rust projects attracts a lot of applicants.


This stopped working many years ago. Every top-level site now has its own private cache of resources from all other domains.

You likely have dozens of copies of Google Fonts, each in a separate silo, with absolutely zero reuse between websites.

This is because a global cache used to work like a cookie, and was used for tracking.


Where "many years ago" is... 11 years ago for Safari! https://bugs.webkit.org/show_bug.cgi?id=110269


ah, I had forgotten about that, you're right.

well at least you don't have to download it more than once for the site, but first impressions matter yeah

