
Not necessarily disagreeing with your points, or the general argument, but I think the trade-offs and consequences of choosing a language with automated memory management are reasonably well understood and manageable. Can we say the same about vast amounts of AI-written code?


Automated memory management was fiercely debated at the time. So were high-level programming languages, so much so that Paul Graham was moved to write "Beating the Averages".


Fair enough - I wasn't there at the time. Perhaps (obviously?) we're still at the debating stage of AI-assisted coding. But I think we can say we now understand the trade-offs etc. around memory-managed languages.


I don't agree, but all of this misses the point I'm making, which is that the stuff AI automates away from you is no more depriving you of practice time than automated memory management, which also wholesale eliminated a huge part of the actual practice of programming. You could literally draw the same analogy to practicing scales with manually writing arena allocators.
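To make the "practicing scales" analogy concrete, here's a minimal bump-arena sketch (mine, not from the thread) of the kind of allocator people hand-rolled before garbage-collected runtimes made that optional:

  // Minimal bump arena: every allocation comes out of one block, and a single
  // free releases everything. Assumes 'align' is a power of two.
  #include <cstddef>
  #include <cstdlib>

  struct Arena {
      char*  base;
      size_t capacity;
      size_t used;
  };

  Arena arena_create(size_t capacity) {
      return Arena{static_cast<char*>(std::malloc(capacity)), capacity, 0};
  }

  void* arena_alloc(Arena& a, size_t size,
                    size_t align = alignof(std::max_align_t)) {
      size_t offset = (a.used + align - 1) & ~(align - 1);  // round up to alignment
      if (offset + size > a.capacity) return nullptr;       // arena exhausted
      a.used = offset + size;
      return a.base + offset;
  }

  void arena_destroy(Arena& a) {
      std::free(a.base);   // one free tears down every allocation at once
      a = {nullptr, 0, 0};
  }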


I'm a bit surprised that you're surprised by my argument, given your awareness of a similar argument in history (one that has played out quite a few times).

It is true that the high-level languages "won out", and now maybe even scripting languages have (Python is incredibly popular), but that doesn't mean the concerns of the time were unwarranted. Many of those concerns did come true. Honestly, I don't think anyone was arguing that we couldn't keep moving forward with high-level languages, but rather that we'd be able to write code faster at the expense of hardware, and that this can lead to enshittification (though no one used that exact term). I'm not convinced that most professional software isn't extremely resource inefficient, or that there isn't a lot of enshittification.

  > Paul Graham was moved to write "Beating the Averages"
I mean, look at the article:

  This is the same argument you tend to hear for learning Latin. It won't get you a job... but it will improve your mind, and make you a better writer in languages you do want to use, like English.
The reason learning Latin helps with English is that you start to learn the underlying structures. It would still benefit your English even if people spoke Latin regularly; these are orthogonal properties. Lisp benefited Graham because it was so powerful, and that power comes from getting you to think in lambda calculus.

I know Lisp is a high-level language, but there is more to all of this than such a simple reduction. C and C++ are high-level programming languages too. Even Python was mentioned in that article! So if your takeaway was that high-level languages are the winners, then I think you've missed most of the story.

Though I find his article starts to lose coherence as it moves into the Blub Paradox. There's no real definition of "power", and he weirdly conflates it with how high-level a language is, even though plenty of scripting languages like Python and Perl were available at the time. So if he's saying that Lisp was the most powerful, then power isn't about how "high" level the language is, which is the metaphor he moves into. It's also odd to hear his argument about the Blub programmer looking "up" the power continuum and thinking those languages are just as powerful. I think a lot of people assume other languages are more powerful than the ones they usually write in (or even know). Hell, that's usually the driving reason to learn a new language.

The real reason no one writes in Lisp has already been answered: no one understands it, and people will make compromises to ensure they can understand one another. This is a self-reinforcing paradigm, though...


I just don't think there's a way around needing a coherent argument for why writing reference counts on structs by hand wasn't important to developing coding fluency, but rote re-typing of the same build systems, main() scaffolding, and trivial SQL queries is. If anything, the opposite seems likelier to be the truth.
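For anyone who hasn't had to do it, "writing reference counts on structs by hand" means something like this sketch (illustrative names, not from any particular codebase):

  // Manual reference counting: every owner must remember to retain/release,
  // which GC'd languages and smart pointers later automated away.
  #include <cstddef>
  #include <cstdlib>

  struct Buffer {
      int   refcount;
      char* data;
  };

  Buffer* buffer_create(size_t size) {
      Buffer* b = static_cast<Buffer*>(std::malloc(sizeof(Buffer)));
      b->refcount = 1;                              // creator holds the first reference
      b->data = static_cast<char*>(std::malloc(size));
      return b;
  }

  Buffer* buffer_retain(Buffer* b) {
      b->refcount++;                                // a new owner shares the buffer
      return b;
  }

  void buffer_release(Buffer* b) {
      if (--b->refcount == 0) {                     // last owner frees everything
          std::free(b->data);
          std::free(b);
      }
  }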


  > but rote re-typing of the same build systems
I hear this argument a lot with coding agents, but I just don't understand it. Why are you rewriting so much code? Scaffolding like main()? I'm not going to buy that automating 1-4 lines of code is a real time sink, and if it is, we've got macros for that. If you're repeating lines, that sounds like what functions and classes are for; if your code involves rewriting the same things over and over, it's going to become an unmaintainable mess, with the same function written in a dozen different places. Should they not just be packaged into a library? Long variable or function names? We've had autocomplete for decades, including autocomplete that completes whole lines and is contextually aware. There are better solutions to a lot of this, like templates and functors, that let us move up through abstraction and greatly minimize re-typing. These solutions reduce lines of code, increase flexibility, and vastly reduce the need to re-type things.
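As a concrete example of the templates/functors point, a single generic helper can replace a family of near-identical loops (the names here are made up for illustration):

  // One template instead of re-typing the same filtering loop per type.
  #include <vector>

  template <typename T, typename Predicate>
  std::vector<T> keep_if(const std::vector<T>& items, Predicate pred) {
      std::vector<T> out;
      for (const T& item : items) {
          if (pred(item)) out.push_back(item);   // the only part that ever varied
      }
      return out;
  }

  // Usage: each call site shrinks to just the predicate.
  // auto evens  = keep_if(numbers, [](int n) { return n % 2 == 0; });
  // auto adults = keep_if(people,  [](const Person& p) { return p.age >= 18; });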

I'll admit that coding agents seem great if you want to measure a programmer's output by the number of lines of code they generate, but writing lines of code is just a small portion of the work.


If you believe that, you clearly don't believe what the commenter that kicked this subthread off believes, that all this typing and going through the motions is important for building fluency. I'm happy to agree with you about that and move on.


  > you clearly don't believe what the commenter that kicked this subthread off believes
You mean me? I'm not being hypocritical. We just have different notions of what those "motions" are.



