
Okay, how am I supposed to use them "correctly"? Because explaining step by step, in more detail than I would for a junior developer, how to do a small task in an existing codebase, only for it to get it wrong not once, not twice, not three times, but more, is not a productivity boost.

And here's the difference between someone like me and an LLM: I can learn and retain information. If you don't understand this, you don't have a correct understanding of LLMs.


It is entirely true that current LLMs do not learn from their mistakes, and that is a difference between, e.g., an LLM and a human intern.

It is us, the users of the LLMs, that need to learn from those mistakes.

If you prompt an LLM and it makes a mistake, you have to learn not to prompt it in the same way in the future.

It takes a lot of time and experimentation to find the prompting patterns that work.

My current favorite tactic is to dump sizable amounts of example code into the models every time I use them. I find this works extremely well. I will take code that I wrote previously that accomplishes a similar task, drop that in and describe what I want it to build next.
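
For illustration, a minimal sketch of that tactic, assuming the OpenAI Python client (the file name, model choice, and task description are all hypothetical):

    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    # Hypothetical prior code that already solves a similar task
    example_code = open("export_users_csv.py").read()

    prompt = (
        "Here is existing code from my project that exports users to CSV:\n\n"
        + example_code
        + "\n\nWrite a new job in the same style that exports orders to CSV, "
          "reusing the same helpers and error handling."
    )

    response = client.chat.completions.create(
        model="gpt-4o",  # any capable chat model works for this pattern
        messages=[{"role": "user", "content": prompt}],
    )
    print(response.choices[0].message.content)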


You seem to be assuming that the thing I'm learning is not "Stop using LLMs for this kind of work".

Is it not a bit weird to freely give away your entire code base (I assume it's personal, not your company's, but maybe I'm wrong) to an entity like Google?


As a business owner who uses Cursor, this is a real risk that I worry about (third parties stealing my code). However, the massive productivity benefit of having access to AI tools far outweighs the risk of them copying my business based on the code alone. Besides, AI is making code less and less valuable. My code is not the moat -- the hard part is the network, traction, brand, distribution, etc.


Do you have actual data showing cursor (or any LLM) is a massive productivity benefit for coding? What are the heuristics?


How common is it to have a personal project that isn't open source? Probably more common than I think, but it seems like a foreign concept to me.

Either my code isn't commercialized so I don't mind "giving" it away, or it is commercialized but wouldn't be safe from a clean room implementation anyway. Isn't that what bigco would do if they really wanted to steal your idea?


This whole thread has to be satire.


Have something else write code for you to be a better programmer? Yeah... no, that's not how it works.


It's been refreshing to read these perspectives as a person who has given up on using LLMs. I think there's a lot of delusion going on right now. I can't tell you how many times I've read that LLMs are huge productivity boosters (specifically for developers) without a shred of data/evidence.

On the contrary, I started to rely on them despite them constantly providing incorrect, incoherent answers. Perhaps they can spit out a basic React app from scratch, but I'm working on large code bases, not TODO apps. And the thing is, over the year-plus I used them, I got worse as a developer. Using them hampered my learning of another language I needed for my job (my fault: I relied on LLMs instead of reading docs and experimenting myself, which I assume a lot of people do, even experienced devs).


When you get outside the scope of a cruddy app, they fall apart. Trouble is that the business only sees CRUD until we as developers have to fill in the complex states, and that's when all hell breaks loose, because who thought of that? Certainly not your army of frontend and backend engineers who warned you about this for months on end...

The future will be one of broken UIs and incomplete emails saying "I don't know what to do here"...


The sad part is that there is a _lot_ of stuff we can now do with LLMs that was practically impossible before. And with all the hype, it takes some effort, at least for me, to not get burned out on all that and to stay curious about them.

My opinion is that you just need to be really deliberate about what you use them for. Any workflow that requires human review because precision and responsibility matter leads to the irony of automation: the human in the loop gets bored, especially if the success rate is high, and misses the flaws they were meant to react to. Like safety drivers for self-driving-car testing: a job that is both incredibly intense and incredibly boring, and very difficult to do well.

Staying in that analogy, driver assist systems that keep the driver at the wheel, engaged and entertained, are more effective. Designing software like that is difficult. Development tooling is just one use case, but we could build such _amazingly_ useful features powered by LLMs. Instead, what I see most people build -- vibe coding and agentic tools -- runs right into the ironies of automation.

But well, however it plays out, this too shall pass.


Or someone who has been a developer for a decade plus trying to use these models on actual existing code bases, solving specific problems. In my experience, they waste time and money.


These people are the most experienced, yes, but by the same token they also have the most incentive to disbelieve that an AI will take their job.


To be more clear, it's operatives of the Heritage Foundation, now working in the government, who are putting this into place. Does anyone think Trump actually does much day to day? He often seems completely unaware of what's going on in his own government. I invite anyone to watch his evening press conferences, where he's handed a bunch of Executive Orders, is told what he's signing (he has no clue), and signs them.


Too bad there's not a maximum age for being elected president.


I think the point is you could just... write the code yourself.


And I do, most of the time. But sometimes the code to be written is quite verbose and I am on a tight deadline, and vetting the LLM's code still takes less time and effort than writing it myself. Otherwise, yeah.


I've had the same experience as the person to whom you're responding. After reading your post, I have to ask: if you're putting so much effort into prompting it with specific points, correcting it often, etc., why not just write the code yourself? It sounds like you're putting a good deal of effort into prompting it.

Aren't you worried that over time you'll rely on it too much and your offhand knowledge will get worse?


I'm still spending less effort/time. A very significant amount.

I do write plenty of things myself. Sometimes I ignore AI completely and write 100s of lines. Sometimes I take Copilot suggestions every other line, as I'm writing something "common" and Copilot can "read" my mind. And sometimes I write 100s of lines purely by prompting. When to do which is a fine line; it also depends on mood.

I am not worried about that, as I spend hours every day reading. I'm also the type of person who, when something is needed in a document, doesn't search for it with CTRL+F but manually looks through it. It always takes more time, but I also learn things adjacent to the topic I need.

And I never commit a single line from AI without reading and understanding it myself. So it might come up with a 100-line solution for me, but I probably already know what I wanted, and on the off chance it comes up with something correct in a way I did not know, I read it and learn from it.

Ultimately, to me, the knowledge that I can !reset null in a docker compose override is important. Remembering whether it is !null reset, reset !null, or !reset null (i.e., the syntax) is not. My offhand knowledge is not getting worse, as I am constantly learning things; I just focus less on specific syntax or API signatures now.
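
For illustration, a minimal sketch of that override pattern (service name and values are made up; this assumes a Compose version recent enough to support the !reset YAML tag):

    # compose.yaml: the base file publishes a port and sets an env var
    services:
      app:
        image: example/app
        ports:
          - "8080:80"
        environment:
          DEBUG: "1"

    # compose.override.yaml: !reset removes inherited attributes on merge
    services:
      app:
        ports: !reset []          # drop the published ports
        environment: !reset null  # drop the environment mapping entirely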

You can apply the same argument to IDEs. Almost all developers would fail to write proper JS/TS/Java etc. without IDE help.


I have read somewhere that LLMs are mostly helpful to junior developers.

Is it possible the person claiming success with all these languages/tools/technologies is just at a junior level and is subjectively correct, but has no point of reference for how fast coding is for seniors and what quality code looks like?


1. You have read somewhere... but you don't have any experience. LLMs are really bad for junior developers.

  A. They have no skills/experience to judge AI output.
  B. They don't learn from a sudden magical wall of code output.
  C. They don't get to explore, and thus don't learn.
  D. Ultimately, LLMs act as a bad drug that keeps them dependent and stagnant.
  E. LLMs are really good for the higher end of senior devs. This also means they don't need as many juniors anymore and don't mentor juniors much. This is the biggest loss for juniors.
2. I think you are absolutely right. I became a Staff Engineer with junior level LLM coding. More power to me I guess :(


Not OP, but it becomes natural and doesn't take a lot of time.

Anyway, if you want to, LLMs can today help with a ton of programming languages and frameworks. If you use any of the top 5 languages and it still doesn't work for you, either you're doing some esoteric work or you're doing it wrong.


Could you point me to a youtube video or a blog post which demonstrates how LLMs help writing code which outperforms a proficient human?

My only conditions:

- It must be demonstrated by adding a feature (>= 20 LOC) to a bigger code base

- The added feature cannot be a leaf feature (means it must integrate with the rest of the system at multiple points)

- The prompting has to be less effort/faster than typing the solution in the programming language yourself

You can choose any programming language/framework you want. I don't care if it is Java, JavaScript, TypeScript, C, Python... hell, I am fine with any language, with or without a framework.


I'm in the same boat. I've largely stopped using these tools, other than for asking questions about a language I'm less familiar with, or about a complex TypeScript type, where they can (sometimes) be helpful. Otherwise, I felt like I was just wasting my time and becoming lazier/worse as a developer. I do wonder whether LLMs have hit a wall and we're in a hype cycle.


Yes, I have the same feeling about the wall/hype cycle. Most of my time goes to understanding code and formulating a plan to change it without breaking anything... even if LLMs generated 100% perfect code on the first try, it would not help in a big way.

One thing I forgot to mention is asking LLMs questions from within the IDE instead of doing a web search... this works quite nicely, but again, it is not a crazy productivity boost.

