
Tell that to your customers. And tell them how much longer it will take humans to fix the bugs generated by AI. Or tell them that you'll never fix the bugs because you're too busy vibe coding new ones.

I'm not saying bugs aren't a problem. I'm saying that if an emerging, fast-improving tech is only slightly behind a human coder now, it seems conceivable that we're not far from the point where it reaches parity.

Exactly. I'm sure assembly language programmers from the 1980s could easily write code that ran 2x faster than what the compilers of the time produced. But compilers kept getting better, assembly language programming eventually became a rare job, and humans can now rarely outperform compilers on whole-program compilation.

Assembly experts still write code that runs faster than code produced by compilers. Being slower is predictable and is solved with better hardware, or just by waiting. This is fine for most, so we switched to easier or more portable languages. The output of the program remains the same.

The impact of having 1.7x more bugs is difficult to assess and is not solved that easily. The comparison would work if this were about optimisation: code that is 1.7x slower or more memory-hungry.


> Assembly experts still write code that runs faster than code produced by compilers.

They sometimes can, but it's no longer a guaranteed outcome. Superoptimizers can often put manual assembly to shame.

> Impact of having 1.7x more bugs is difficult to assess and is not solved that easily.

Time will tell. Arguably, the number of bugs produced by AI two years ago was much higher than 1.7x. In two more years it might be only 1.2x. In four years' time it might be barely measurable. The trend over the next couple of years will show whether this is a viable way forward.


Auto-assign bug tickets to AI agents, which work to fix the bugs; have the AI code reviewed; make adjustments; send the result to a human for sanity checking; deploy via CI.

You need unsafe Rust for FFI - interfacing with the rest of the kernel, which is still C, uses raw pointers, has no generics, doesn't track ownership, etc. One day there might be enough Rust in the kernel to have pure-Rust subsystem APIs which would no longer require unsafe blocks to use. This would reverse the requirements, as C would become a second-class citizen with these APIs (not that C would notice or care). How far Rust gets pushed remains to be seen, but it might take a long time to get there.
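
To illustrate, here's a minimal sketch of that FFI boundary (c_device_enable and Device are made-up names): the unsafe block is confined to a wrapper, and a pure-Rust API built on top of it needs no unsafe at the call site.

    // Hypothetical C function from the rest of the kernel.
    extern "C" {
        fn c_device_enable(dev: *mut core::ffi::c_void) -> i32;
    }

    pub struct Device(*mut core::ffi::c_void);

    impl Device {
        // Callers of this pure-Rust API never write `unsafe`.
        pub fn enable(&mut self) -> Result<(), i32> {
            // SAFETY: self.0 is a valid device pointer for the
            // lifetime of Device, an invariant this wrapper upholds.
            match unsafe { c_device_enable(self.0) } {
                0 => Ok(()),
                e => Err(e),
            }
        }
    }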

> One day there might be enough Rust in the kernel to have pure-Rust subsystem APIs which would no longer require unsafe blocks to use.

This is nonsense.

You'd still need unsafe blocks because a kernel requires shared mutable memory in places.

This is like saying "If it compiles, it works", which is absolute nonsense as well.


> This is nonsense.

I was referring to the current unsafe blocks used for Rust->C FFI. Obviously OS code in any language will need to perform low-level operations; those unsafe blocks are never going away.


> I was referring to the current unsafe blocks used for Rust->C FFI.

You need direct shared mutable memory access with runtime locking even in the pure-Rust parts. That's kinda what OSes need, actually. Some things (maybe DMA, possibly page table mutation, register saving/loading, to name a few) can't be compile-time checked.
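
For example, something as simple as poking a memory-mapped register stays unsafe forever, with no C anywhere in sight (the address and register layout here are invented):

    // Hypothetical MMIO base address; what lives there is a
    // hardware contract the compiler cannot see or verify.
    const UART_BASE: *mut u32 = 0x1000_0000 as *mut u32;

    fn write_reg(offset: usize, value: u32) {
        // SAFETY: the caller guarantees UART_BASE maps a real
        // device and offset stays within its register window.
        unsafe {
            core::ptr::write_volatile(UART_BASE.add(offset), value);
        }
    }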

In fact, I would guess that if you gradually moved the Linux code over to Rust, at the end of it you'd still have maybe 50% of it in unsafe blocks.

So, no - your claim is no different than "if it compiles it works".


Language evolves in mysterious ways. FWIW I find offtop to have high cromulency.

"Fucking her brains out would feel off mission"

Yes, people tend to try to dig additional information out of the particular wording (talk about a hidden channel), based on how they would phrase the same message themselves. That's why communication is hard.

Aluminum can be thought of as "solid electricity". The base mineral is abundant, but the transformation is energy-expensive.

Java? Delphi? Better at what?


People love to hate on Maven's XML, but at least it's been mostly the same since 2006. There are conditionals in profile activation expressions, but they are very limited by design. Declarative done right, IMO.
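
For reference, a profile activation block is about the full extent of Maven's conditional logic - a sketch with made-up ids and property names:

    <profile>
      <id>ci</id>
      <activation>
        <!-- Activates only when -Denv.CI=true is set -->
        <property>
          <name>env.CI</name>
          <value>true</value>
        </property>
      </activation>
      <properties>
        <skip.integration.tests>false</skip.integration.tests>
      </properties>
    </profile>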


Let's just say that the weights of opening and closing parentheses do not cancel out.


I've had way more issues with proper indentation in Python and YAML than I have with parentheses in Lisp. Meaningful whitespace is about the worst idea I've seen in a programming language.


You would need to show that, including the parens, the average Lisp program requires more tokens than the equivalent Python.

I'm not sure that's true, because Lisp has a lot of facilities for writing more concise code that are difficult to achieve without the parens.


Because writing proper kernel C code requires decades of experience to navigate the implicit conventions and pitfalls of the existing codebase. The human pipeline producing these engineers is drying up, because nobody's interested in learning that stuff by going through years of patch rejections from maintainers who have been at it since the beginning.

Rust's rigid type system, compiler checks and insistence on explicitness force a _culture change_ in the organization. In time, this means that normal developers will regain a chance to contribute to the kernel with much less risk of breaking stuff. Rust not only makes the compiled binary more robust but also makes the codebase more accessible.


By using Gradle you certainly didn't do yourself any favors.


I am unsure why people feel the need to say this about Gradle. If you aren't doing anything fancy, the most you will touch is the repositories and dependencies blocks of your build script, perhaps adding publishing or shadow plugins and configuring them accordingly - and that has never been simpler than it is now. Gradle breaks when you feel the need to unnecessarily update things like the wrapper version or plugins without considering the implications. The wrapper is bundled in, so you don't have to make a build script work with whatever version you might have installed on your system (if any), and toolchain resolution means you don't even need to install an appropriate JDK version - it does that for you.
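
For what it's worth, here's roughly all of that in one place (the dependency coordinates and JDK version are just placeholders):

    // build.gradle.kts
    plugins {
        java
        `maven-publish`
    }

    repositories {
        mavenCentral()
    }

    dependencies {
        implementation("com.google.guava:guava:33.0.0-jre")
        testImplementation("org.junit.jupiter:junit-jupiter:5.10.0")
    }

    java {
        toolchain {
            // Toolchain resolution fetches a matching JDK if needed.
            languageVersion.set(JavaLanguageVersion.of(21))
        }
    }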

If the build script being a DSL is the issue, they're even experimenting with declarative Gradle scripts [0], which is going to be nice for people used to something like Maven.

0: https://declarative.gradle.org/


So now there will be a Kotlin DSL, a Groovy DSL and a declarative DSL, spread out over up to five files in the project root. Gradle is like C++, trying to climb out of its complexity hole by digging deeper with every new version.

The problem with Gradle is that it never had a clear philosophy to begin with. It's trying to be everything to everybody, changes best practices every year, and has enough features that the project at hand could be built entirely out of Gradle scripts itself.

And oh, it still requires an update to run every time a new JDK is released, even though the JDK is the most backward compatible thing ever written.


And yet. None of these issues exist in Maven to begin with.


At the same time, only Maven requires doing a clean install from time to time, as it fails to properly track what needs updating.

Gradle is better from this perspective, and hopefully with its "kotlinization" we will see some stability, which was the biggest issue it had before.


I personally never had to do a clean install, and I think this is being perpetuated due to a mixture of habit and paranoia.

In any case, what are the proposed benefits of the "kotlinization"? I tried it about a year ago but realized that it's just a syntax-level wrapper around the same old DSL underneath. In the end, I still viewed it as an ill-described DSL with a massive learning curve outside of the happy paths.

