This didn't need Microsoft's teeth to fail. There isn't a single "Linux" that game devs can build for. The kernel ABI isn't sufficient to run games, and Linux doesn't have any other stable ABI. The APIs are fragmented across distros, and the ABIs get broken regularly.
The reality is that for applications with visuals better than vt100, the Win32+DirectX ABI is more stable and portable across Linux distros than anything else that Linux distros offer.
I would like CPUs to move to the GPU model, because in the CPU land adoption of wider SIMD instructions (without manual dispatch/multiversioning faff) takes over a decade, while in the GPU land it's a driver update.
To be clear, I'm talking about the PTX -> SASS compilation (which is something like LLVM bitcode to x86-64 microcode compilation). The fragmented and messy high-level shader language compilers are a different thing, in the higher abstraction layers.
I don't know what you're referring to. Rust's threads are OS threads. There's no magic runtime there.
The same memory corruption gotchas caused by threads exist, regardless of whether there is a borrow checker or not.
Rust makes it easier to work with non-trivial multi-threaded code by giving robust guarantees at compile time, even across 3rd-party dependencies, even when dynamic callbacks are used.
Appeasing the borrow checker is much easier than dealing with heisenbugs. Type system compile-time errors are a thing you can immediately see and fix before problems happen.
OTOH some racing use-after-free or memory corruption can be a massive pain to debug, especially when it may not be possible to reproduce in a debugger due to timing, or hard to catch when the corruption "only" mangles the data instead of crashing the program.
It's not the runtime; it's how the borrow-checker interoperates with threads.
This is an aesthetics argument more than anything else, but I don't think the type theory around threads and memory safety in Rust is as "cooked" as single-thread borrow checking. The type assertions necessary around threads just get verbose and weird. I expect with more time (and maybe a new paradigm after we've all had more time to use Rust) this is a solvable problem, but I personally shy away from Rust for multi-threaded applications because I don't want to please the type-checker.
You know that Rust supports scoped threads? For the borrow checker, they behave like same-thread closures.
Borrow checking is orthogonal to threads.
You may be referring to the difficulty of satisfying the 'static lifetime requirement (i.e. temporary references are not allowed when spawning a thread that may live for an arbitrarily long time).
If you just spawn an independent thread, there's no guarantee that your code will reach join(), so there's no guarantee that references won't be dangling. The scoped threads API catches panics and ensures the thread will always finish before references given to it expire.
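For example, a minimal sketch using std::thread::scope:

    use std::thread;

    fn main() {
        let mut counts = vec![0u32; 4];

        // Scoped threads may borrow local data: the scope guarantees every
        // thread has finished before `counts` can be used (or dropped) again.
        thread::scope(|s| {
            for (i, slot) in counts.iter_mut().enumerate() {
                s.spawn(move || {
                    *slot = i as u32 * 10; // each thread gets a disjoint &mut
                });
            }
        });

        println!("{counts:?}"); // [0, 10, 20, 30]
    }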
I'll have to look more closely at scoped threads. What I'm referring to is that compared to the relatively simple syntax of declaring scopes for function arguments and return values, the syntax when threads get involved is (to take an example from the Rust Book, Chapter 21):
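    // the signature of std::thread::spawn
    pub fn spawn<F, T>(f: F) -> JoinHandle<T>
    where
        F: FnOnce() -> T,
        F: Send + 'static,
        T: Send + 'static,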
But really, that first type signature is not very complex. It can get far, far, far worse. That’s just what happens when you encode things in types.
(It reads as “spawn is a function that accepts a closure that returns a type T. It returns a JoinHandle that also wraps a T. Both the closure and the T must be able to be sent to another thread and have a static lifetime.”)
With Tesla it's all-or-nothing, and when it inevitably drives poorly, I can only turn it off. It physically resists me turning the steering wheel while it's driving, and overcoming the resistance results in an unpleasant and potentially dangerous jerk.
OTOH in IONIQ I can control lane assist and adaptive cruise control separately. The lane assist is additive to normal steering. It doesn't take over, only makes the car seem to naturally roll along the road.
Go's goroutines aren't plain C threads (blocking syscalls are magically made async), and Go's stack isn't a normal C stack (it's tiny and grown dynamically).
A C function won't know how to behave in Go's runtime environment, so to call a C function Go needs to make itself look more like a C program, call the C function, and then restore its magic state.
Other languages like C++, Rust, and Swift are similar enough to C that they can just call C functions directly. CPython is a C program, so it can too. Golang was brave enough to do fundamental things its own way, which isn't quite C-compatible.
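For example, a rough sketch of the Rust side: declare the C function, then call it with nothing more than an unsafe marker and no runtime trampoline (on the 2024 edition the extern block is spelled unsafe extern "C"):

    extern "C" {
        fn abs(x: i32) -> i32; // from libc, which Rust binaries already link against
    }

    fn main() {
        // No stack switch or scheduler bookkeeping: this compiles to a plain call.
        let y = unsafe { abs(-42) };
        println!("{y}"); // 42
    }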
Go (gc) was also a C program originally. It still had the same overhead back then as it does now. The implementation language is immaterial. How things are implemented is what is significant. Go (tinygo), being a different implementation, can call C functions as fast as C can.
> ...so it can too.
In my experience, the C FFI overhead in CPython is significantly higher than in Go (gc). How are you managing to avoid it?
I think in case of CPython it's just Python being slow to do anything. There are costs of the interpreter, GIL, and conversion between Python's objects and low-level data representation, but the FFI boundary itself is just a trivial function call.
> but the FFI boundary itself is just a trivial function call.
Which is no different than Go, or any other language under the sun. There is no way to call a C function other than trivially, as you put it. The overhead in both Python and Go is in doing all the things you have to do in order to get to that point.
A small handful of languages/implementations are designed to be like C so that they don't have to do all that preparation in order to call a C function. The earlier comment included CPython among them. But the question asked how that is being pulled off, as that isn't the default. By default, CPython carries tremendous overhead to call a C function, way more than Go.
I wonder if they should be using something like libuv to handle this. Instead of flipping state back and forth, create a playground for the C code that looks more like what it expects.
Java FFI is slow and cumbersome, even more so if you're using the fancy auto-async from recent versions. The JVM community has mostly bitten the bullet and rewritten the entire world in Java rather than using native libraries; you only see JNI calls for niche things like high-performance linear algebra. IMO that was the right tradeoff, but it's also often seen as e.g. the reason why Java GUIs on the desktop suck.
Other languages generally fall into either camp of having a C-like stack and thread model and easy FFI (e.g. Ruby, TCL, OCaml) and maybe having futures/async but not in an invisible/magic way, or having a radically different threading model at the cost of FFI being slow and painful (e.g. Erlang). JavaScript is kind of special in having C-like stack but being built around calling async functions from a global event loop, so it's technically the first but feels more like the second.
JNI is the second or maybe third FFI for Java. JRI existed before it and that was worse, including performance. The debugging and instrumentation interfaces have been rewritten more times.
C# does marshal/unmarshal for you, with a certain amount of GC-pinning required for structures while the function is executing. It's pretty convenient, although not frictionless, and I wouldn't like to say how fast it is.
Tracking in an external system adds overhead not only for filing the issue, but also for triaging it, backlog management, re-triaging to see if it's still a problem, and then closing it when it's finished. Issues in an external system may also be overlooked by developers working on this particular code.
There are plenty of small things that are worth fixing, but not worth as much as the overhead of tracking them.
TODO in code is easy to spot when someone is working on this code, and easy to delete when the code is refactored.
I think the key distinction is that tracking in an external system exposes the task for triage/management/prioritization to people who aren't reading the code, while a TODO comment often leaves the message in exactly the spot where a programmer would read it if the possibility of a problem became an actual problem that they had to debug.
In my experience, these can often be replaced with logger.Error("the todo") or throw new Exception("the todo"), which read about as well as //TODO: the todo, but also can bubble up to people not actually reading the code. Sometimes, though, there's no simple test to trigger that line of code, and it just needs to be a comment.
I've also seen good use of automation tools that monitor the codebase for TODOs and if they last for more than a couple weeks escalate them into a "real" ticketing system.
(I've also seen that backfire and used to punish engineering.)
SonarQube, for instance, will flag TODOs as "Code Smells" and then has some automation capabilities to eventually escalate those to tickets in certain systems if plugins are configured.
I've also seen people do simpler things with GitHub Actions to auto-create GitHub Issues.
> I've also seen good use of automation tools that monitor the codebase for TODOs and if they last for more than a couple weeks escalate them into a "real" ticketing system
I'm sorry, but that's exactly the kind of automation that sounds helpful in theory but ends up creating bloat and inefficiency in practice. Like the article says, the second a TODO gets a timer/deadline attached, it stops being a quick, lightweight note and turns into process overhead (note the distinction between something that is urgent and needs fixing now, and something that really is just a TODO).
Maybe a weird way to put it, but it's like a TODO that used to be lean and trail-ready - able to carry itself for miles over tough terrain with just some snacks and water - suddenly steps on a scale, gets labeled "overweight" and "bloated", flagged as a problem, and sent into the healthcare system. It loses its agility and becomes a burden.
"But the TODO is a serious problem that does need to get addressed now" Ok then it was never actually a TODO, and thats something to take up with the dev who wrote it. But most TODOs are actually just TODOs - not broken code, but helpful crumbs left by diligent, benevolent devs. And if you start to attack/accuse every TODO as "undone work that needed to be done yesterday" then youll just create a culture where devs are afraid to write them, which is really stupid and will just create even more inefficiency/pitfalls down the road - way more than if you had just accepted TODOs as natural occurrences in codebases
> Tracking in an external system adds overhead not only for filing the issue, but also for triaging it, backlog management, re-triaging to see if it's still a problem, and then closing it when it's finished.
Which is already what you're doing in that system, and what the system is designed for.
Source code is not designed to track and manage issues and make sure they get prioritized, so you shouldn't be using your source code to do this.
We add TODOs during development, and then during review we either add a ticket and remove the TODO, or fix the issue as part of the PR and remove the TODO.
> Which is already what you're doing in that system, and what the system is designed for.
No it isn't. The system is designed to get managers to pay for it, and it does that very well; it's very ineffective at tracking or triaging issues.
> Source code is not designed to track and manage issues and make sure they get prioritized, so you shouldn't be using your source code to do this.
Most things that people build systems to manage outside of the source code repo end up being managed less effectively that way.
This is just intellectually lazy IMO. "Ticket management software isn't good at managing tickets, it's just good at getting stupid CTOs to pay for it because execs are stupid didn't you guys know?"
I'm sure that's true for enterprise bloatware, but there are dozens of excellent open source and/or low cost issue trackers. Hell, Trello will do about 90% of what you need out of the box if you're operating with 1-3 people.
> This is just intellectually lazy IMO. "Ticket management software isn't good at managing tickets, it's just good at getting stupid CTOs to pay for it because execs are stupid didn't you guys know?"
Far more intellectually lazy to assume that because people pay for it it does something useful. Have you actually tried working without a ticketing system, not just throwing up your hands as soon as anything went wrong but making a serious attempt?
The main problem I have with ticketing (and project management) systems is that I can't get the people asking me to do things to use the system. I'll set it up and show them how to use it, and then they tell me about issues via email or text message or voice call. I end up entering the tickets/tasks myself, at which point I might as well be using my own org-mode setup.
From all I've seen, heard, and read about that problem over the decades (yup, I think it's not mere years any more):
The only solution is to be rock-solid in refusing to do anything if there isn't a ticket for it. Your nine-thousand-percent-consistent reply to those emails, text messages, and voice calls needs to be "Yeah, make a ticket about it. I've shown you how, and that's the way we do it. No ticket, no action from me."
If you can't be that "mean" about it, you'll have to be a make-my-own-tickets doormat forever. In that perspective, doesn't feel all that "mean" any more, does it?
Where I'm at, we have a bot that automatically creates a ticket for IT any time someone posts a message to the #it-help channel on Slack. It even automatically routes the ticket based on the content of the message with decent accuracy.
Perhaps I've been spoiled having worked almost exclusively in organizations where it's completely acceptable to get a message on Slack, or Teams, or email, or whatever, with some bug or issue, and respond with "please create a ticket" and the person... creates a ticket.
Yeah, if nobody uses the system, or if you have to expend organizational capital to get them to do it (they view it as doing something for you instead of just doing their job), the system will by definition be worth less and be less helpful.
Has anyone ever made a language or extended a language with serious issue tracking? I can definitely imagine a system where every folder and file can have a paired ticket file where past, future and current tasks are tracked. Theoretically, it could even bind to a source management extension for the past ones. It won't ever be as powerful and manager-friendly as JIRA, but it would be good enough for many projects.
There are various things built on git (the issues don't need representation in the current state of the source necessarily after all) but I'm not aware of any with any traction - it's a hobby Show HN thing, it appeals to us, but not to product.
I think it'd be cooler to have it as part of the source and kind of build incrementally. So you'd have the bits in code that get added to the pair file that will then be added to the directory file... Then you can add other pairs for things like test results, and it could be decent. Some Lego Logseq-esque thing :)
I wouldn't use git as a basis for it since then management is completely out. Hell, I'm probably out as well since I see git as a necessary evil.
> Source code is not designed to track and manage issues and make sure they get prioritized, so you shouldn't be using your source code to do this.
Indeed. Who in their right mind would think it is reasonable to track relevant tasks purposely outside of a system designed and used explicitly to get tasks done?
Also, no one prevents a developer from closing a ticket before triaging it. If you fix a TODO, just post a comment and close it. I mean, will your manager complain about effortlessly clearing the backlog? Come on.
You can leave the TODO in the comments - e.g. the ruff linter has an optional rule to disallow TODO comments unless they're followed by an issue URL.
If you put that in the CI, then you can use TODOs either as blockers you wish to fix before merging, or as long term comments to be fixed in a future ticket.
Some years ago, I started to use FIXME to indicate that something is blocking the PR and needs to be done before merging, and TODO if something can be done at a later point in time. Then, CI only needs to grep for FIXME to block merging the PR, which works for practically any language. Works pretty well for me, maybe that tip can help others as well.
> Tracking in an external system adds overhead not only for filing the issue, but also for triaging it, backlog management, re-triaging to see if it's still a problem, and then closing it when it's finished.
Filing the issue can take as long as writing the TODO message.
Triaging it, backlog management, re-triaging to see if it's still a problem... It's called working on the issue. I mean, do you plan on working on a TODO without knowing if it is still a problem? Come on.
> Issues in an external system may also be overlooked by developers working on this particular code.
I stumbled upon TODO entries that were over a decade old. TODOs in the code are designed to be overlooked.
The external system was adopted and was purposely designed to help developers track issues, including bugs.
You are also somehow assuming that there is no overhead in committing TODO messages. I mean, you need to post and review a PR to update a TODO message? How nuts is that.
> There are plenty of small things that are worth fixing, but not worth as much as the overhead of tracking them.
If those small things are worth fixing, they are worth filing a ticket.
If something you perceive as an issue is not worth the trouble of tracking, it's also not worth creating a comment to track it.
This gives me an idea for a source control/task tracking system where TODOs in code get automatically turned into tickets in your tracker and then removed from your code automatically.
That way you don't fill your code with a list of TODOs and you'll still be able to track what you want to improve in your codebase.
It might not be the right tool for everyone, but I'd love it.
There are some artificial limitations, but I love the upside: I don't need defensive programming!
When my function gets an exclusive reference to an object, I know for sure that it won't be touched by the caller while I use it, but I can still mutate it freely. I never need to make deep copies of inputs defensively just in case the caller tries to keep a reference to somewhere in the object they've passed to my function.
And conversely, as a user of libraries, I can look at an API of any function and know whether it will only temporarily look at its arguments (and I can then modify or destroy them without consequences), or whether it keeps them, or whether they're shared between the caller and the callee.
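As a rough sketch, that's visible directly in the signatures:

    // Borrows only for the duration of the call; the caller may mutate or
    // drop the Vec afterwards.
    fn peek(v: &[i32]) -> usize {
        v.len()
    }

    // Exclusive borrow: nothing else observes v mid-mutation, but the caller
    // gets it back when the call returns.
    fn tweak(v: &mut Vec<i32>) {
        v.push(42);
    }

    // Takes ownership: the callee may keep the Vec for as long as it likes.
    fn keep(v: Vec<i32>) -> Vec<i32> {
        v
    }

    fn main() {
        let mut v = vec![1, 2, 3];
        peek(&v);
        tweak(&mut v);
        let v = keep(v);
        println!("{v:?}"); // [1, 2, 3, 42]
    }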
All of this is especially important in multi-threaded code where a function holding on to a reference for too long, or mutating something unexpectedly, can cause painful-to-debug bugs. Once you know the limitations of the borrow checker, and how to work with or around them, it's not that hard. Dealing with a picky compiler is IMHO still preferable to dealing with mysterious bugs from unexpectedly-mutated state.
In a way, the borrow checker also makes interfaces simpler. The rules may be restrictive, but the same rules apply to everything everywhere. I can learn them once, and then know what to expect from every API using references. There are no exceptions in libraries that try to be clever. There are no exceptions for single-threaded programs. There are no exceptions for DLLs. There are no exceptions for programs built with -fpointers-go-sideways. It may be tricky like a game of chess, but I only need to consider the rules of the game, and not odd stuff like whether my opponent glued pieces to the chessboard.
Yes! One of the worst bugs to debug in my entire career boiled down to a piece of Java mutating a HashSet that it received from another component. That other component had independently made the decision to cache these HashSet instances. Boom! Spooky failure scenarios where requests only start to fail if you previously made an unrelated request that happened to mutate the cached object.
This is an example where ownership semantics would have prevented that bug. (references to the cached HashSets could have only been handed out as shared/immutable references; the mutation of the cached HashSet could not have happened).
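For instance, a rough sketch of a cache that can only hand out shared, read-only views (the names here are made up for illustration):

    use std::collections::{HashMap, HashSet};
    use std::sync::Arc;

    // The cache only ever hands out shared, immutable views of its entries;
    // the type system stops callers from mutating the cached sets.
    struct Cache {
        sets: HashMap<String, Arc<HashSet<i32>>>,
    }

    impl Cache {
        fn get(&self, key: &str) -> Option<Arc<HashSet<i32>>> {
            self.sets.get(key).cloned() // cheap pointer clone, not a deep copy
        }
    }

    fn main() {
        let mut sets = HashMap::new();
        sets.insert("a".to_string(), Arc::new(HashSet::from([1, 2, 3])));
        let cache = Cache { sets };

        let view = cache.get("a").unwrap();
        println!("{}", view.contains(&2)); // read-only access
        // view.insert(4); // would not compile: no mutable access through Arc
    }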
The ownership model is about much more than just memory safety. This is why I tell people: spending a weekend to learn rust will make you a better programmer in any language (because you will start thinking about proper ownership even in GC-ed languages).
Yeah that's definitely optimistic. More like 1-6 months depending on how intensively you learn. It's still worth it though. It easily takes as long to learn C++ and nobody talks about how that is too much.
Yes. I learned Rust in a weekend. Basic Rust isn't that complicated, especially when you listen to the compiler's error messages (which are 42x as helpful compared with C++ compiler errors).
Damn. You are a smart person. It’s taken me months and I’m still not confident. But I was coming from interpreted languages (+ small experience with c).
> This is an example where ownership semantics would have prevented that bug.
It’s also a bug prevented by basic good practices in Java. You can’t cache copies of mutable data and you can’t mutate shared data. Yes it’s a shame that Java won’t help you do that but I honestly never see mistakes like this except in code review for very junior developers.
The whole point is that languages like Java won't keep track of what's "shared" or "mutable" for you. And no, it doesn't just trip up "very junior developers in code review", quite the opposite. It typically comes up as surprising cross-module interactions in evolving code bases, that no "code review" process can feasibly catch.
Speak for yourself. I haven't seen any bug like this in Java for years. You think you know better and my experience is not valid? Ha. Ok. Keep living in your dreams.
> When my function gets an exclusive reference to an object, I know for sure that it won't be touched by the caller while I use it, but I can still mutate it freely.
I love how this very real problem can be solved in two ways:
1. Avoid non-exclusive mutable references to objects
2. Avoid mutable objects
The former approach results in pervasive complexity and rigidity (Rust); the latter results in pervasive simplicity and flexibility (Clojure).
Shared mutable state is the root of all evil, and it can be solved either by completely banning sharing (actors) or by banning mutation (functional), but Rust gives fine-grained control that lets you choose on a case-by-case basis, without completely giving up either one. In Rust, immutability is not a property of an object, but a mode of access.
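A small sketch of what "mode of access" means in practice:

    fn main() {
        let mut data = vec![1, 2, 3];

        let shared = &data;           // while a shared borrow is in use,
        println!("{}", shared.len()); // the Vec is effectively immutable

        data.push(4);                 // the shared borrow has ended; mutation is allowed again

        let exclusive = &mut data;    // exclusive access: no other reference may coexist
        exclusive.push(5);

        println!("{data:?}");         // [1, 2, 3, 4, 5]
    }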
It's also silly to blame Rust for not having flexibility of a high-level GC-heavy VM-based language. Rust deliberately focuses on the extreme opposite of that: low-level high-performance systems programming niche, where Clojure isn't an option.
There may be lots of uninformed post-hoc rationalizations now, but it couldn't have started with everyone collectively deciding to irrationally dislike Ada, and not even try it. I suspect it's not even the ignorant slander that is the cause of Ada's unpopularity.
Other languages survive being called designed by committee or having ugly syntax. People talk shit about C++ all the time. PHP is still alive despite getting so much hate. However, there are rational reasons why these languages are used, they're just more complicated than beauty of the language itself, and are due to complex market forces, ecosystems, unique capabilities, etc.
I'm not qualified to answer why Ada isn't more popular, but an explanation implying there was nothing wrong with it, only everyone out of the blue decided to irrationally dislike it, seems shallow to me.
The related "Belgium is unsafe for CVD" post explains that if you discover any vulnerability in anything in Belgium, it automatically creates a legal obligation on you, with a 24h deadline, to report this secretly and exclusively to Belgian authorities, with logs of everything you've done, even if you're not a Belgian citizen and don't reside in Belgium.
This is a very short deadline, with onerous requirements. They most likely won't give you permission to share any information about this vulnerability with anyone else. If it's a common vulnerability affecting non-Belgian entities, you'll be required to leave them uninformed and vulnerable.
The most rational response for law-abiding vulnerability researchers is to stay away from everything Belgian and never report anything to them.
Sharing one SQLite connection across the process would necessarily serialize all writes from the process. It won't do anything for contention with external processes, and the writes within the process wouldn't be concurrent any more.
Basically, it adds its own write lock outside of SQLite, because the pool can implement the lock in a less annoying way.
SQLite's lock is blocking, with a timeout that aborts the transaction. An async runtime can have a non-blocking lock that allows other tasks to proceed in the meantime, and is able to wait indefinitely without breaking transactions.
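A rough sketch of that approach, assuming tokio and rusqlite (the names and structure are illustrative, not any particular pool's API):

    use std::sync::Arc;
    use tokio::sync::Mutex;

    // One designated write connection, guarded by an async lock instead of
    // SQLite's blocking busy handler: tasks waiting for the lock yield to the
    // runtime and can wait indefinitely without aborting their transaction.
    #[derive(Clone)]
    struct Writer {
        conn: Arc<Mutex<rusqlite::Connection>>,
    }

    impl Writer {
        async fn execute(&self, sql: &str) -> rusqlite::Result<usize> {
            let conn = self.conn.lock().await; // non-blocking wait for the write slot
            // (a real pool would run this on a blocking thread, e.g. via
            // tokio::task::spawn_blocking, since rusqlite calls are synchronous)
            conn.execute(sql, [])
        }
    }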
What's the benefit of this over just doing PRAGMA busy_timeout = 0; to make it non-blocking?
After all, as far as I understand, the busy timeout is only going to occur at the beginning of a write transaction, so it's not like you have to redo a bunch of queries.