Never use someone else's synthetic key as your primary key. Even if the API hands out sequential integers and you want ordered keys, you should still mint your own sequential IDs.
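For example (a hypothetical sketch, names made up): keep their key as plain, unique-indexed data next to a key you generate yourself.

// Their synthetic key is just an attribute you can look rows up by;
// your own id is the primary key everything else references.
interface Invoice {
  id: number;               // our own sequential primary key
  vendorInvoiceId: string;  // their key: stored, unique-indexed, never our PK
}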
As written, yes, it is UB in C++ at least (in C, type punning through a union is arguably allowed, but it's still risky). Assuming int and float are the same size, I think the main risk is that, if you do
#include <stdio.h>

int main(void) {
    union {
        float f;
        int i;
    } foo;
    foo.f = 3.14f;         /* write through the float member... */
    printf("%x\n", foo.i); /* ...then read the same bytes as an int */
}
the compiler can think the assignment to foo.f isn't used anywhere, and thus can choose not to do it.
In C++, you have to use memcpy instead (compilers can and often do recognize that idiom); since C++20 there is also std::bit_cast.
Pure JS without TypeScript also has "types". TypeScript doesn't give you nominal types either; it's only structural. So when you say that you "know it's already been processed", you just have a mental type of "Parsed" vs "Raw". With a type system, it's like you have a partner dedicated to tracking that. But without one, it doesn't mean you aren't doing any parsing or type tracking of your own.
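For what it's worth, you can fake nominal types on top of TypeScript's structural system with the usual "branding" trick; a minimal sketch (names made up):

type Raw = string & { readonly __brand: "Raw" };
type Parsed = string & { readonly __brand: "Parsed" };

// The only way to obtain a Parsed value is to go through parse(),
// so the compiler now does the "has this been processed?" tracking.
function parse(input: Raw): Parsed {
  return input.trim() as Parsed; // real processing would go here
}

declare const userInput: Raw;
const ok: Parsed = parse(userInput);
// const bad: Parsed = userInput; // compile error: Raw is not Parsed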
Why does "true" parsing have to error out on the very first problem? It is more than possible (though maybe not easy) to keep parsing and collecting errors as they appear. Zod, the example given in the post, does exactly that.
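For instance, a minimal sketch with Zod: safeParse walks the whole object and reports every issue it finds rather than throwing on the first one.

import { z } from "zod";

const User = z.object({
  name: z.string(),
  age: z.number(),
});

const result = User.safeParse({ name: 42, age: "forty" });
if (!result.success) {
  // Both the `name` issue and the `age` issue show up here.
  for (const issue of result.error.issues) {
    console.log(issue.path.join("."), issue.message);
  }
}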
Before parsing, the argument array contains both the flag to enable the option and the flag to disable it. Validation would either throw an error or accept it as either enabled or disabled. But importantly, it wouldn't change the arguments. If the assumption is that the last flag overrides anything before it, then the CLI command is valid with the option disabled.
And now correct behaviour relies on all the code that uses that option making the same assumption.
Parsing, on the other hand, would create a new config where `option` is an enum: enabled, disabled, or not given (see the sketch below). No confusion about multiple flags or anything. It gives the rest of the program a single view of what the input config was.
Whether that parsing is done by a third-party library or first-party code, declaratively or imperatively, is beside the point.
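A minimal TypeScript sketch of that idea (the flag names are made up):

// Parse argv once into a config with a three-state option; last flag wins.
type OptionState = "enabled" | "disabled" | "not-given";

interface Config {
  option: OptionState;
}

function parseArgs(argv: string[]): Config {
  let option: OptionState = "not-given";
  for (const arg of argv) {
    if (arg === "--enable-option") option = "enabled";
    else if (arg === "--disable-option") option = "disabled";
  }
  // The rest of the program only ever sees the resolved Config, never
  // the raw flag array, so "last one wins" lives in exactly one place.
  return { option };
}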
Fwiw, the two feed into each other. I read this same thing somewhere else before and have since tried not to take my phone with me into the bathroom. I started doing that because I would spend a lot of time in there. But once I have my phone, if I start reading something a bit too long, I can easily spend 20 minutes in there before realising it.
I want an option to give fake permissions. A lot of apps are pretty much necessary (due to network effects). I don't want to give my contact or location data to them, but they also refuse to work without it, even though they don't need it for the stuff I am doing. So just let me provide fake data instead. As far as the app is concerned, it has the permissions it wanted.
That used to exist, but it's bad UX: the user doesn't understand why the app they denied a permission to isn't working well, and gives it a bad review. It's better UX for the app to say "I can't work without this permission", though it's worse for tech-savvy users.
> It's better UX for the app to say "I can't work without this permission", though it's worse for tech-savvy users.
The app shouldn't get to decide what permissions it "can't work without." That's how you get calculator apps that claim they can't possibly work without GPS location.
Why? The user might want to just browse the map without displaying his location on it. The user might want to provide an address instead of his own location (assuming that function exists in the app). Why not just let the user run it, and let whatever actually needs the permission fail gracefully? Whether the functionality lost is worth granting access to permission-gated data should be up to the user, not up to the developer.
Because giving the app fake location data when the permission is denied leads to the user complaining that the map isn't showing their correct location. Apps don't want to get a bad rating for user error.
For the current category of LLM-based AI, "AI optimised" means "old and popular". Even if you add a layer with much more detail, one that may be a lot more verbose or whatever, that layer would not be "AI optimised".
Instead of reference counting, consider having two types: an "owner" type, which actually contains the resource and whose destructor releases it, and "lender" types, which hold a reference to the resource (a pointer, or just a logical copy; e.g., an fd can simply be copied into the lender but only ever closed by the owner) and don't release it on destruction.
Same thing as what Rust does with `String` and `str`.
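A rough TypeScript/Node sketch of the split, using the fd example (TypeScript has no destructors, so the owner releases explicitly, and "lenders never close" is enforced only by convention here):

import { openSync, readSync, closeSync } from "node:fs";

// Lender: holds a copy of the fd, can use it, never closes it.
class FileLender {
  constructor(private readonly fd: number) {}
  readInto(buf: Buffer): number {
    return readSync(this.fd, buf);
  }
}

// Owner: the one place the fd is ever closed.
class FileOwner {
  private constructor(private readonly fd: number) {}
  static open(path: string): FileOwner {
    return new FileOwner(openSync(path, "r"));
  }
  lend(): FileLender {
    return new FileLender(this.fd); // the fd is just copied
  }
  close(): void {
    closeSync(this.fd); // releasing happens here and only here
  }
}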