With an approach like that, I've always thought it seemed risky to launch nuclear material into the air. E.g. what happens if the rocket explodes during takeoff? Or something goes wrong once it's in space? Will it rain nuclear material?
Obviously you just put the nuclear material inside of the in-flight data recorder so it will survive a rocket failure.
If you're being serious: similar questions were raised about Cassini's RTG before its launch, but it didn't carry enough nuclear material for that to be a problem.
If we were to try to use a fusion reaction in space, we'd probably use the existing one.
Which is a great idea if we ignore all the other problems in academia, e.g. the pressure to publish. I fear that taking such a hard-line stance will just result in much less science being done.
Puppy Linux was my first Linux install back when I was around 13 years old, and I remember it fondly. I recall I even found it easier to get up and running than Ubuntu. (Which is somewhat puzzling in hindsight?)
Agreed. This is exactly why I built Shell Bling Ubuntu https://github.com/hiAndrewQuinn/shell-bling-ubuntu and test it every 6 months or so against a fresh install of Ubuntu in a VM - I don't want to have to remember all the little bits of trivia needed to get my command-line tools, Neovim setup, etc. working together harmoniously. The best part is I can hand it off to my colleagues and blow their minds with just how nice a terminal experience can be, more or less "out of the box".
Is my understanding correct that this would provide version-agnostic Python bindings? Currently I build my bindings separately for each version (e.g. building and linking against Python 3.7, 3.8, etc.). While automated, it still makes CI/CD take quite a long time.
As others have said, this has been supported since the limited/stable APIs were introduced. What this adds is a way of implementing a Python extension that can be loaded by (not just compiled for - which is already an improvement!) different Python implementations, namely CPython, PyPy and GraalVM.
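For context, here's a minimal sketch of what the existing stable ABI already buys you on CPython (the module name "demo" and the function are made up for illustration). Defining Py_LIMITED_API restricts the code to the stable ABI, so one compiled binary can be loaded by any CPython at or above the targeted version:

    /* demo.c - hypothetical stable-ABI extension module */
    #define Py_LIMITED_API 0x03070000   /* target CPython 3.7+ */
    #include <Python.h>

    /* add_one(n) -> n + 1, using only stable-ABI calls */
    static PyObject *add_one(PyObject *self, PyObject *arg)
    {
        long n = PyLong_AsLong(arg);
        if (n == -1 && PyErr_Occurred())
            return NULL;
        return PyLong_FromLong(n + 1);
    }

    static PyMethodDef methods[] = {
        {"add_one", add_one, METH_O, "Add 1 to an integer."},
        {NULL, NULL, 0, NULL}
    };

    static struct PyModuleDef moduledef = {
        PyModuleDef_HEAD_INIT, "demo", NULL, -1, methods
    };

    PyMODINIT_FUNC PyInit_demo(void)
    {
        return PyModule_Create(&moduledef);
    }

Built once as an abi3 wheel, that single shared object should import on any CPython from 3.7 onward; the point above is that it still wouldn't load on PyPy or GraalVM, which is what the new approach addresses.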
But it is very limited. Understandably so, as they don't want to ossify the internal APIs, but it is still so limited that you can't actually build anything using just that API, as far as I know.
Woah! Okay, that's very cool. I thought it was much more limited than that (for the stable ABI). Awesome!
It seems like they mostly use normal Python as a bridge to the Rust codebase. From what I've seen in their repo, they mostly do not use any CPython APIs (apart from a few wrappers, I think). Which makes sense!
That makes sense: I would assume Polars mostly converts from Python to Rust at the edge and then works in Rust internally.
Though I’ve not really looked at the details, I’d assume most of the missing stuff would be the “intimate” APIs of builtin types, and all the macros that leverage implementation details.
I wouldn't recommend ccache (or sccache) in CI unless you really need it. They are not 100% reliable, and any time you save from caching will be more than lost debugging the weird failures you get when they go wrong.
You can't cache based on the file contents alone. You also need to key the cache on all the OS/compiler queries, variables and settings that the preprocessor depends on, since the header files might generate completely different content depending on which #ifdef gets triggered.
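A toy example of that (a hypothetical compute.c): the file's bytes never change, yet the preprocessed output differs between the two invocations, so a cache key built only from file contents would conflate them.

    /* compute.c - identical file contents, different preprocessed output:
         cc -DBUFSIZE=4096 -c compute.c   vs.   cc -c compute.c */
    #ifndef BUFSIZE
    #define BUFSIZE 512   /* default when the flag isn't passed */
    #endif

    static char buffer[BUFSIZE];

    int buffer_size(void) { return BUFSIZE; }

As the comment further down notes, ccache handles this by hashing the preprocessor output (plus the command line) rather than the raw file contents alone.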
And that’s not impossible, just tedious. One tricky (and often unimportant) part is negative dependencies—when the build depends on the fact that a header or library cannot be found in a particular directory on a search path (which happens all the time, if you think about it). As far as I know, no compilers will cooperate with you on this, so build systems that try to get this right have to trace the compiler’s system calls to be sure (Tup does something like this) or completely control and hash absolutely everything that the compiler could possibly see (Nix and IIUC Bazel).
It’s not about that, that’s not relevant to ccache at all. (And yes, C23 does have __has_include, though not a lot of compilers have C23 yet.) It’s about having potentially conflicting headers in the source file’s directory, in your -I directories, and in your /usr/include directories.
Suppose a previous compile correctly resolved <libfoo.h> to /usr/include/libfoo.h, and that file remains unchanged, but since that time you’ve installed a private build of libfoo such that a new compile would instead resolve that to ~/.local/include/libfoo.h. What you want is to record not just that your compile opened /usr/include/libfoo.h (“positive dependencies” you get with -MD et al.), but that it tried $GITHOME/include/libfoo.h, ~/.local/include/libfoo.h, etc. before that and failed (“negative dependencies”), so that if any of those appear later you can force a recompile.
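To make that concrete, here's a hypothetical translation unit (libfoo.h and LIBFOO_VERSION_MAJOR are made up): the cached result is only valid for as long as every directory searched before the one that won stays free of a libfoo.h.

    /* bar.c - compiled with: cc -I$HOME/.local/include -c bar.c
       Today <libfoo.h> is only found at /usr/include/libfoo.h, so that is
       what gets preprocessed and hashed.  If ~/.local/include/libfoo.h
       shows up later, the next compile would pick it up instead, yet no
       file the previous build actually read has changed - a cache keyed
       only on opened files ("positive dependencies") would wrongly hit. */
    #include <libfoo.h>

    int foo_major(void) { return LIBFOO_VERSION_MAJOR; }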
Oh yeah, that can cause lots of weird problems. I've run into that sort of issue a lot when cross-compiling, because then you often have a system copy of a library and a different version for the target - that can be a real pain.
Please read the documentation before dispensing uninformed advice like this -- it works on the output of the preprocessor and, optionally, file paths.
Why the skepticism? Think about how it works and you'll understand that cache invalidation bugs are inevitable. Hell, cache invalidation is notoriously difficult to get right even when you aren't building it on top of a complex tool that was never designed for aggressive caching.
The fact that it hallucinates doesn’t make it useless for everything, but it does limit its scope. Respectfully, I think you haven’t applied it to the right problems if this is your perspective.
In some ways, it’s like saying the internet is useless because we already have the library and “anyone can just post anything on the internet”. The counter to this could be that an experienced user can sift through the bullshit found on websites.
A similar argument can be made for LLMs: they are a learnable tool. Sure, they won’t write valid moon-lander code, but they can teach you how to get up and running with a new library.
To me the discussion here reads a little like: “Hah. See? It can’t do everything!” It makes me wonder if the goal is to convince each other that yes, indeed, humans are not yet replaced.
It’s next-token regression; of course it can’t truly introspect. That being said, LLMs are amazing tools, o1 is yet another incremental improvement, and I welcome it!
It’s not always the right tool, depending on the task. IMO using LLMs is also a skill, much like learning how to Google stuff.
E.g. apparently C# generics aren’t something it’s good at. Interesting; so don’t use it for that, since there it’s apparently the wrong tool. In contrast, it’s amazing at C++ generics, and thus speeds up my productivity. So do use it for that!
> I think it could really empower indie devs for instance.
I think indie devs are already pretty empowered by the number of small game engines, etc., given the quantity - and quality - of stuff they're putting out (just look at itch.io).
I’m not sure I understand. It’s not required, as is evidenced by all the amazing indie games we already have, pre-AI. But if it helps, why not? Maybe this way there can be even more great games.