I think the weight should be expressed in average yearly banana harvests to keep the numbers low and comprehensible. It's going to be intuitive, I promise.
I don't get the hype. Installed it on my Framework laptop instead of the usual Xfce. IMHO, it tries too hard to be smart and second-guess my intentions. Basic stuff like Alt+F4 doesn't work for some reason. I just couldn't be bothered to learn another desktop environment, so here goes Xfce again.
FW did have a keyboard bug early on that affected the function keys. Had to pass a param to the kernel to work around it. Not sure it's still an issue on recent kernels; I haven't thought about it in a year or two.
Alt+F4 is bound to "close window" by default, so I don't know why that didn't work for you. Something I really enjoy about KDE is that I can reconfigure practically all keyboard shortcuts. I use Meta+(numpad +/-) to change the volume.
Do modern compilers (register allocation / instruction generation) involve some kind of integer programming or constraint solving? I vaguely remember some compilers using the Z3 solver.
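To illustrate what I mean (a toy sketch of my own, not a claim that any production compiler does exactly this): register allocation can be phrased as graph coloring and handed to a constraint solver such as Z3 via its C++ API.

    // Toy sketch only: register allocation as graph coloring, encoded as a
    // constraint problem for Z3's C++ API (z3++.h). All names and sizes here
    // are made up for illustration.
    #include <z3++.h>
    #include <iostream>
    #include <string>
    #include <utility>
    #include <vector>

    int main() {
        const int k = 2;  // physical registers available
        const int n = 3;  // virtual registers v0..v2
        // pairs of virtual registers that are live at the same time
        std::vector<std::pair<int, int>> interferes = {{0, 1}, {1, 2}};

        z3::context c;
        z3::solver s(c);
        std::vector<z3::expr> reg;
        for (int v = 0; v < n; ++v) {
            reg.push_back(c.int_const(("v" + std::to_string(v)).c_str()));
            s.add(reg[v] >= 0 && reg[v] < k);   // each vreg maps to one of k registers
        }
        for (auto [a, b] : interferes)
            s.add(reg[a] != reg[b]);            // interfering vregs need different registers

        if (s.check() == z3::sat) {
            z3::model m = s.get_model();
            for (int v = 0; v < n; ++v)
                std::cout << "v" << v << " -> r" << m.eval(reg[v]) << "\n";
        } else {
            std::cout << "unsat: something has to spill\n";
        }
    }

As far as I know, mainstream allocators mostly use heuristics (greedy/linear-scan style) because solver time blows up on large functions, which is partly why I'm curious whether anyone uses this kind of encoding for real.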
What is this sorcery? I've been reading HN for years, and this is the first time I've seen someone bring up a memory-safe C++. How is that not in the headlines? What's the catch, build times? Do I have to sell my house to get it?
EDIT: Oh, found the tradeoff:
>Fil-C is currently about 200x slower than legacy C according to my tests
The catch is performance. It's not 200x slower though! 2x-4x is the actual range you can expect. There are many applications where that could be an acceptable tradeoff for achieving absolute memory safety of unmodified C/C++ code.
But also consider that it's one guy's side project! If it were standardized and widely adopted, I'm certain the performance penalty could be reduced with more effort on the implementation. And I'm also sure that for new C/C++ code written with Fil-C's performance characteristics in mind, we could come up with ways to mitigate the performance issues.
The design choices that make it slower can't be mitigated, in theory or in practice. C++ competes against other languages that make virtually identical tradeoffs; those languages are far more mature and have never closed that performance gap in a meaningful way.
For the high-end performance-engineered cases that C++ is famously used for, the performance loss may be understated, since it actively interferes with standard performance-engineering techniques.
It may have a role in boring utilities and such, but those are legacy roles for C++. That might be a good use case! Most new C++ code is in applications where something like Fil-C would be an unacceptable tradeoff.
It won't ever be 1x the runtime, but if you think doing better than 2x is an insurmountable challenge, then we'll have to agree to disagree on that one. And for high-end performance engineering, I expect code written with Fil-C in mind could do better than naive code simply compiled with Fil-C.
I mean, if I could accept a 2x-4x performance hit, I wouldn't be using C++ in the first place. At that point, there are any number of other languages that are miles more pleasant to program in.
There are a whole lot of C/C++ libraries and utilities that would be great to use in a memory-safe context without rewriting them. And it's not exactly easy to get within 2x of C's runtime in most other languages. But again, I think that performance penalty could be significantly reduced with more effort on the implementation.
The latest version of Fil-C with -O1 is only around 50-100% slower than ASAN, which is very acceptable in my book. I'm actually more "bothered" by its compilation time (roughly 8x that of clang with ASAN).
I’ve found Fil-C 0.670 to be around 30x slower than a regular build, with -O1 vs. -O2 not making much difference. Perhaps it is very dependent on the kind of code. IIRC the author of Fil-C (which I think is an incredible project, to be clear) wants it to be possible to run Fil-C builds in production, so I think the comparison to a regular non-ASAN build is relevant.
Rolled my eyes at "For experts only: don't do it yet". Shut up already. I will do it right now, because otherwise it will nag me forever, the surface area will grow, and every time new code interacts with what-could-have-been-optimized I will spend 5 minutes wondering whether I should finally optimize it.
I used to work on FAANG-scale optimizations. Generally, the only code-level optimizations worth bothering with are the ones that fix inefficient algorithms (think O(n^2) vs O(n), plus caching). Those get you because they work during testing but fall over in prod. You should also optimize DB and RPC access patterns when appropriate. As things scale, there can be larger architectural changes that help. Finally, there are the cute micro-optimizations that everyone thinks will matter. They don't.
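To make the O(n^2)-vs-O(n) point concrete, here's a made-up toy example (my own illustration, not from any codebase I actually worked on): deduplicating ids by rescanning a vector inside a loop versus keeping a hash set alongside. The quadratic version looks fine on small test data and falls over at prod scale.

    // Illustration of the "works in testing, falls over in prod" class of bug:
    // O(n^2) dedup via repeated linear scans vs. O(n) dedup with a hash set.
    #include <algorithm>
    #include <iostream>
    #include <string>
    #include <unordered_set>
    #include <vector>

    // O(n^2): every new id rescans everything seen so far.
    std::vector<std::string> dedup_quadratic(const std::vector<std::string>& ids) {
        std::vector<std::string> out;
        for (const auto& id : ids)
            if (std::find(out.begin(), out.end(), id) == out.end())
                out.push_back(id);
        return out;
    }

    // O(n): a hash set makes each membership check O(1) amortized.
    std::vector<std::string> dedup_linear(const std::vector<std::string>& ids) {
        std::vector<std::string> out;
        std::unordered_set<std::string> seen;
        for (const auto& id : ids)
            if (seen.insert(id).second)  // true only the first time we see this id
                out.push_back(id);
        return out;
    }

    int main() {
        std::vector<std::string> ids = {"a", "b", "a", "c", "b"};
        std::cout << dedup_linear(ids).size() << " unique ids\n";  // prints 3
    }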
This is like the #1 cause of spaghetti, unmaintainable, deadlocked codebases - a single developer who knows every "best practice" and optimization technique and will not hesitate to apply it in every situation, regardless of practicality or need, as a way to demonstrate their knowledge. It's the sign of an insecure developer - please stop.
If there's a definite performance problem and a simple solution, then sure, go ahead. But applying every optimisation that comes to mind can produce a dog's breakfast of unmaintainable code and then when a real performance problem comes along, it can be really hard to fix.