OpenGL is fine; it has the same issues now that it had before, but none of them really come from "old age" or from being deprecated in any way. It's less debuggable and much harder to get good performance out of than the lower-level APIs, but beyond that it's still great.
Honestly, starting out with OpenGL and moving to DX12 (which gets translated to Vulkan on Linux very reliably) is not a bad plan overall; DX12 is IMO a nicer and better API than Vulkan while still retaining the qualities that make it an appropriate one once you actually want control.
Edit:
I would like to say that I really think one ought to use DSA (Direct State Access) and generally as modern an OpenGL style as one can, though. It's easy to get bamboozled into using older APIs because a lot of tutorials will do so, but you need to translate those things into modern OpenGL instead; trust me, it's worth it.
Actual modern OpenGL is not as overtly about global state as the older API so at the very least you're removing large clusters of bugs by using DSA.
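To make the difference concrete, here's a rough sketch of the same texture setup both ways; it assumes an OpenGL 4.5+ context and a loader like glad are already initialized, and `pixels` is a hypothetical 512x512 RGBA8 buffer, so treat it as an illustration rather than drop-in code:

    // Classic bind-to-edit: you mutate whatever happens to be bound
    // to the global GL_TEXTURE_2D target.
    GLuint create_texture_old_style(const void* pixels) {
        GLuint tex;
        glGenTextures(1, &tex);
        glBindTexture(GL_TEXTURE_2D, tex);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, 512, 512, 0,
                     GL_RGBA, GL_UNSIGNED_BYTE, pixels);
        return tex;
    }

    // DSA (GL 4.5+): you operate on the handle directly and never touch
    // the global bind points until you actually use the texture.
    GLuint create_texture_dsa(const void* pixels) {
        GLuint tex;
        glCreateTextures(GL_TEXTURE_2D, 1, &tex);
        glTextureParameteri(tex, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTextureStorage2D(tex, 1, GL_RGBA8, 512, 512);
        glTextureSubImage2D(tex, 0, 0, 0, 512, 512,
                            GL_RGBA, GL_UNSIGNED_BYTE, pixels);
        glBindTextureUnit(0, tex);
        return tex;
    }

Because the DSA version never edits through global targets, a helper that happens to bind something else can't silently corrupt your texture setup, which is exactly the class of bugs I mean.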
I've found it has fewer idiosyncrasies, is slightly less tedious in general and provides a lot of the same control, so I don't really see much of an upside to using Vulkan. I don't love the stupid OO-ness of DX12 but I haven't found it to have much of an adverse effect on performance, so I've just accepted it.
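To show what I mean by OO-ness: it's mostly COM plumbing, where everything is an interface held through a smart pointer instead of Vulkan's plain handles and create-info structs. A minimal sketch (assumes the Windows SDK headers, error handling omitted, not code from our actual engine):

    #include <d3d12.h>
    #include <wrl/client.h>
    #pragma comment(lib, "d3d12.lib")

    using Microsoft::WRL::ComPtr;

    ComPtr<ID3D12Device> create_device() {
        // nullptr adapter = let the runtime pick the default one.
        ComPtr<ID3D12Device> device;
        D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_12_0, IID_PPV_ARGS(&device));

        // Further objects come back as COM interfaces created off the device.
        D3D12_COMMAND_QUEUE_DESC desc = {};
        desc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;
        ComPtr<ID3D12CommandQueue> queue;
        device->CreateCommandQueue(&desc, IID_PPV_ARGS(&queue));

        return device;
    }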
On top of that, you can just use a much better shading language (HLSL) with DX12 by default, without jumping through hoops. I did set up HLSL usage in Vulkan as well, but I'm not in love with the idea of having to add decorators everywhere and using a (sort of) second-class-citizen language to do things. The mapping from HLSL to Vulkan was also good enough, but still just a mapping; it didn't always feel super straightforward.
(Edit: To spell it out properly, I initially used GLSL because I'm used to it from OpenGL and had previously written some Vulkan shaders, but the reason I didn't end up sticking with GLSL is that it's just very, very bad in comparison to HLSL. I would maybe use some other language if everything else didn't seem so overwrought.)
I don't hate Vulkan, mind you, I just wouldn't recommend it over DX12, and I certainly prefer using DX12. To have less translation going on, I might switch to Vulkan for future applications/games, though I'd still just write against Win32.
OpenGL still exists, runs and works fine on the two platforms that matter. I think its death has been overstated quite a bit.
With that said, we decided to focus on DX12 eventually because it just made sense. I've written our platform layers targeting OpenGL, DX12, Vulkan and Metal, and once you've internalized all of these I really don't think the horribleness of the lower-level APIs is as bad as people make it out to be. They're very debuggable, very clear and well supported.
For what it's worth my experience with Metal was that it was the closest any of the more modern APIs got to OpenGL. It's just stuck on an irrelevant OS. If they made sure you could use it on Windows & Linux I think it'd fill a pretty cool niche.
> WebGPU is in many ways closer to Metal than to Vulkan.
If only that were true for the resource binding model ;) WebGPU BindGroups are a 1:1 mapping to the Vulkan 1.0 binding model, and it's also WebGPU's biggest design wart. Even Vulkan is moving away from that overly rigid model, so we'll probably be stuck with a WebGPU that's more restrictive than required by any of its backend APIs :/
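For anyone who hasn't been near Vulkan 1.0: the rigidity is that every shader resource slot gets declared up front in a fixed layout object that pipelines are then baked against, and WebGPU's BindGroupLayouts mirror this one-to-one. A minimal sketch of the Vulkan side (assumes a valid VkDevice, error handling omitted):

    #include <vulkan/vulkan.h>

    VkDescriptorSetLayout make_layout(VkDevice device) {
        // Binding 0: uniform buffer for the vertex stage.
        // Binding 1: combined image sampler for the fragment stage.
        VkDescriptorSetLayoutBinding bindings[2] = {};
        bindings[0].binding         = 0;
        bindings[0].descriptorType  = VK_DESCRIPTOR_TYPE_UNIFORM_BUFFER;
        bindings[0].descriptorCount = 1;
        bindings[0].stageFlags      = VK_SHADER_STAGE_VERTEX_BIT;
        bindings[1].binding         = 1;
        bindings[1].descriptorType  = VK_DESCRIPTOR_TYPE_COMBINED_IMAGE_SAMPLER;
        bindings[1].descriptorCount = 1;
        bindings[1].stageFlags      = VK_SHADER_STAGE_FRAGMENT_BIT;

        VkDescriptorSetLayoutCreateInfo info = {};
        info.sType        = VK_STRUCTURE_TYPE_DESCRIPTOR_SET_LAYOUT_CREATE_INFO;
        info.bindingCount = 2;
        info.pBindings    = bindings;

        VkDescriptorSetLayout layout = VK_NULL_HANDLE;
        vkCreateDescriptorSetLayout(device, &info, nullptr, &layout);
        return layout; // pipelines (and descriptor sets) are baked against this
    }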
I'll check out WebGPU at some point, I guess. I've written our rendering layer in all of the major APIs (OpenGL, DX12, Vulkan and Metal) and found it very instructive to have all of them to compare at the same time because it really underscored the differences; especially maintaining all of them at the same time. We eventually decided to focus only on DX12, but I think I'll revive this "everything all at once" thing for some side projects.
As someone who has done this since DX7, what you're looking for is WebGPU, via either Dawn (Google) or wgpu-native (Firefox). WebGPU works. It's 99% there across platforms for use.
There's another wrapper abstraction we all love and use called BGFX that is nice to work with. Slightly higher level than Vulkan or Metal but lower than OpenGL. Works on everything: consoles, fridges, phones, cars, desktops, digital signage.
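To give a feel for where it sits, bringing it up is roughly this much code. This is a from-memory sketch of the public API (you supply the native window handle from GLFW/SDL/whatever), so double-check it against the real bgfx headers:

    #include <bgfx/bgfx.h>

    void run(void* native_window_handle, uint32_t width, uint32_t height) {
        bgfx::Init init;
        init.type              = bgfx::RendererType::Count; // auto-pick a backend
        init.platformData.nwh  = native_window_handle;      // HWND / NSWindow* / X11 Window
        init.resolution.width  = width;
        init.resolution.height = height;
        init.resolution.reset  = BGFX_RESET_VSYNC;
        bgfx::init(init);

        bgfx::setViewClear(0, BGFX_CLEAR_COLOR | BGFX_CLEAR_DEPTH, 0x303030ff);
        bgfx::setViewRect(0, 0, 0, uint16_t(width), uint16_t(height));

        for (int frame = 0; frame < 600; ++frame) { // stand-in for a real event loop
            bgfx::touch(0);  // keep view 0 cleared even if nothing is drawn
            bgfx::frame();   // submit; bgfx talks to whichever backend it picked
        }

        bgfx::shutdown();
    }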
My own engines have jumped back and forth between WebGPU and BGFX for the last few years.
Personally I'm not interested in the web as a platform. The APIs themselves I'm interested in, but as a target I think the web needs to die for everything that isn't a document.
I never mentioned the web as a target, rather devices. You don’t need a browser, you need a window or a surface to draw on and use C/C++/Rust/C# to write your code.
WebGPU is a standard, not necessarily for the web alone.
At no point does a browser ever enter the picture.
Well, it started off with “all the right intentions” of providing low-level access to the GPU for browsers to expose as an alternative to WebGL (and OpenGL ES like API’s of old).
However, throw a bunch of engineers in a room…
When wgpu got mature enough, they needed a way to expose the Rust API for other needs. The C wrapper came, and then, for testing and other needs, wgpu-native. I'm not a member of either team so I can't say why for sure, but because of those decisions we have this powerful abstraction available on pretty much anything that can draw a web page. And since it's just exposing the buffers and things that Vulkan, Metal, etc. are already based on, it's damned fast.
The added benefit is you get WGSL as your shading language which can translate into any and all the others.
The downside is that it provides NO WINDOW support, as that needs to be provided by the platform, i.e. you. Good news is the tests and stuff use GLFW, and it's the same setup to get Vulkan working as it is to get WebGPU working: make window, probe it, make surface/swap chain, start your threads.
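Something like this, shown with the Vulkan flavour since the WebGPU (Dawn/wgpu-native) version follows the same shape with its own surface/descriptor types; error handling and the swap chain itself are omitted, so treat it as a sketch:

    #define GLFW_INCLUDE_VULKAN
    #include <GLFW/glfw3.h>

    int main() {
        glfwInit();
        glfwWindowHint(GLFW_CLIENT_API, GLFW_NO_API); // we bring our own GPU API

        GLFWwindow* window = glfwCreateWindow(1280, 720, "demo", nullptr, nullptr);

        // Probe which instance extensions the platform needs for surfaces.
        uint32_t ext_count = 0;
        const char** exts = glfwGetRequiredInstanceExtensions(&ext_count);

        VkInstanceCreateInfo ci = {};
        ci.sType                   = VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO;
        ci.enabledExtensionCount   = ext_count;
        ci.ppEnabledExtensionNames = exts;
        VkInstance instance = VK_NULL_HANDLE;
        vkCreateInstance(&ci, nullptr, &instance);

        // Make the surface; the swap chain then gets created against it.
        VkSurfaceKHR surface = VK_NULL_HANDLE;
        glfwCreateWindowSurface(instance, window, nullptr, &surface);

        while (!glfwWindowShouldClose(window)) {
            glfwPollEvents();
            // ... acquire/present against the swap chain, kick off your threads ...
        }

        vkDestroySurfaceKHR(instance, surface, nullptr);
        vkDestroyInstance(instance, nullptr);
        glfwDestroyWindow(window);
        glfwTerminate();
    }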
The WebGPU spec identifies itself squarely as a web standard: "WebGPU is an API that exposes the capabilities of GPU hardware for the Web." There are also no mentions of non-web applications.
It's true that you can use Dawn and wgpu from native code, but that's all outside the spec.
The intent and the application are never squarely joined. Yes it’s made for the web. However, it’s an API for graphics. If you need graphics, and you want to run anywhere that a web page could run, it’s a great choice.
If you want to roll your own abstraction over Vulkan, Metal, DX12, Legacy OpenGL, Legacy DX11, Mesa - be my guest.
You mentioned "Google" and Firefox, one of which is a browser. I clarified that I'm not interested in the web as a target, not to dismiss your entire suggestion but rather to clarify that that particular part doesn't interest me.
A lot of those Linux native builds will have been using Vulkan.
Parity between DX12 and Vulkan is pretty high and all around I trust the vkd3d[0] layer to be more robust than almost anything else in this process since they're such similar APIs.
The truth is that it's just a whole lot harder to make a game for Linux APIs and (even) base libraries than it is to make it for Windows, because you can't count on anything being there, let alone being a stable version.
Personally I don't see a future where Linux continues being as it is (a culture of shared libraries even when they don't make sense, no standard way of doing the basics, etc.) and we somehow stop using translation layers.
We'll either have to translate via abstraction layers (or still be allowed to translate Win32 calls) to all of the nasty combinations of libraries that can exist, or we'll have to fix Linux and its culture. The second version is the only one where we get new games for Linux that work as well as they should. The first one undeniably works and is sort of fine, but a bit sad.
0 - vkd3d is the layer that specifically translates D3D12 to Vulkan, as opposed to DXVK, which handles the older D3D versions.
It's not really harder to make a good native Linux port that will keep working, it's just not something most game developers have much experience with.
For what it's worth my experience with Windows 11 is that it's slower than Windows 10 for whatever reason, even though I'm doing exactly the same things in exactly the same ways, so it definitely echoes your assessment.
I personally think Windows has historically been the best OS for native development, but I'm out. I've used Linux a ton on and off since ~2003, and at this point it's looking more and more like there'll be no reason to ever install Windows again. I don't get who Windows 11 and all of these AI features are actually for, but I know for a fact it's not for me.
Now I have to figure out how to actually get my Nvidia card to behave on Linux, or I'll just have to buy an AMD one again. Eventually I might start using the Steam Machine as a devbox; we'll see.
No need to include Elixir here; none of the important bits that will change how you view software come from Elixir. It's just a skin on top of Erlang (plus some standard library wrappers), and that's it.
I'd argue more people use Elixir than Erlang at this point. Sure, it's just an abstraction on top of Erlang, but people learn through Elixir nowadays, not through Erlang.
If you want to learn the actual mind changing aspects of the BEAM, clearly learning the simpler, smaller language with a more direct route to the juice is the way to go. Hence Erlang, not Elixir. I learned Elixir first back in 2015, and then learned Erlang, and have had the pleasure of using both in production. When all was said and done I really think Erlang was better, especially over a long enough time frame.
As a general point I'd like to state that I don't think it really matters what "people" do when you're learning for yourself. In the grand scheme of things approximately no one uses the BEAM, but this doesn't mean that learning how to use it is somehow pointless.
- Leaning on a pre-emptive scheduler to maintain order even with ridiculous numbers of threads ("processes" on the BEAM) running
- Using supervision trees to specify how and when processes and their dependents should be restarted
- Using `gen_server` processes as a standard template for how a thread should be running
There's more to mine from using the BEAM, but I think the above are some of the most important aspects. The first two I've never found to be fully replicated anywhere other than in OTP/BEAM. You don't need them, but once you're bought into the BEAM they're incredibly nice to have.
An LLM might be able to replace the majority of the code Sindre Sorhus has put out there, but it's probably a stretch to think that it could replace someone like John Carmack.
Trivial NPM libraries were never needed, but LLMs really are the nail in the coffin for them even when it comes to the most incompetent programmers because now they can literally just ask an LLM to spit out the exact same thing.
"Proper" is very subjective here. My entire workflow for developing 3D engines is covered a couple of times over with the announced Steam Machine specs. In fact, even when I was working in web backend development it would've covered it as a dedicated development machine and that was with some pretty pathological dev setups, language servers that ate up way too much memory, etc..
The Steam Machine looks to me like it'll become a great optimization target (if it becomes popular enough, which it probably will). Solid, predictable targets are always great, and now we have yet another one that doesn't have the downside of being in some insular, exclusive dev space like PlayStation, Xbox or Nintendo. It's just a PC, in an open ecosystem, with predictable and decent hardware.