
I've been noticing similar patterns as well.

One instructive example was when I was implementing a Terraform provider for an in-house application. CoPilot can template the boilerplate for a Terraform resource implementation in about 3-4 autocompletes, and only gets a bit confused between the plugin-sdk and the older way of implementing resources. But once it has to deal with our in-house application, it can guess some things, but it's not good. Here it's OK.
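For context, the boilerplate it templates well looks roughly like this. A minimal sketch using terraform-plugin-sdk/v2; the "widget" resource and its fields are made up, and the in-house API call is exactly the part where it starts guessing:

    package provider

    import (
        "context"

        "github.com/hashicorp/terraform-plugin-sdk/v2/diag"
        "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
    )

    // Pure plugin-sdk boilerplate: this is the part an LLM completes
    // reliably, because thousands of public providers look identical.
    func resourceWidget() *schema.Resource {
        return &schema.Resource{
            CreateContext: resourceWidgetCreate,
            ReadContext:   resourceWidgetRead,
            DeleteContext: resourceWidgetDelete,
            Schema: map[string]*schema.Schema{
                "name": {
                    Type:     schema.TypeString,
                    Required: true,
                    ForceNew: true,
                },
            },
        }
    }

    func resourceWidgetCreate(ctx context.Context, d *schema.ResourceData, m interface{}) diag.Diagnostics {
        // The call into the in-house API would go here, and this is
        // where the autocompletions turn into guesses.
        d.SetId("widget-" + d.Get("name").(string))
        return resourceWidgetRead(ctx, d, m)
    }

    func resourceWidgetRead(ctx context.Context, d *schema.ResourceData, m interface{}) diag.Diagnostics {
        return nil
    }

    func resourceWidgetDelete(ctx context.Context, d *schema.ResourceData, m interface{}) diag.Diagnostics {
        d.SetId("")
        return nil
    }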

In my private gaming projects on Godot... I tried using CoPilot and it's just terrible, to the point of turning it off. There is plenty of Godot code out there showing how an entity handles a collision with another entity, in hundreds of variations, and it wildly hallucinates between all of them. It's just so distracting and bad. ChatGPT is OK at navigating the documentation, but that's about it.
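To make that concrete, here is just one of the many legitimate spellings of collision handling, a minimal Godot 4 sketch (the "enemies" group is made up). CoPilot tends to blend this with the Godot 3 style (connect("body_entered", self, "_on_body_entered")) or with signals wired up in the editor instead of code, which is where the hallucination comes from:

    extends Area2D

    # One common Godot 4 pattern: connect the body_entered signal in
    # code and react in the handler.
    func _ready() -> void:
        body_entered.connect(_on_body_entered)

    func _on_body_entered(body: Node2D) -> void:
        if body.is_in_group("enemies"):
            queue_free()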

Thinking about my last job, which (don't ask why) involved writing Java code with low-level concurrency primitives like thread pools, raw synchronized statements, and atomics: when I imagine CoPilot on code like that, I honestly feel the strength leaving my body, because it would be so horrible. I once spent literal months chasing a once-in-a-billion concurrency bug in that code.
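To illustrate why that prospect is so scary, here is a hedged sketch (hypothetical names, not the actual code) of the kind of completion that looks right and races anyway: a check-then-act on an atomic counter, next to the CAS loop it should have been.

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.atomic.AtomicInteger;

    public class BoundedPool {
        private final ExecutorService pool = Executors.newFixedThreadPool(4);
        private final AtomicInteger inFlight = new AtomicInteger();
        private static final int LIMIT = 100;

        // Plausible-looking completion: check-then-act on an atomic.
        // Two threads can both pass the check before either increments,
        // so the limit gets exceeded on rare interleavings.
        public boolean submitRacy(Runnable task) {
            if (inFlight.get() < LIMIT) {
                inFlight.incrementAndGet();
                pool.execute(() -> {
                    try {
                        task.run();
                    } finally {
                        inFlight.decrementAndGet();
                    }
                });
                return true;
            }
            return false;
        }

        // Correct version: reserve the slot atomically with a CAS loop.
        public boolean submit(Runnable task) {
            while (true) {
                int current = inFlight.get();
                if (current >= LIMIT) {
                    return false;
                }
                if (inFlight.compareAndSet(current, current + 1)) {
                    break;
                }
            }
            pool.execute(() -> {
                try {
                    task.run();
                } finally {
                    inFlight.decrementAndGet();
                }
            });
            return true;
        }
    }

Both versions read identically at code-review glance; only the second one holds up under contention.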

IMO, only the simplest fill-in-the-framework work really suffers from LLMs, and a well-coached junior can move past that stage quite quickly.



Yeah, I basically treat LLMs as a better Google search. They are indeed a lot better than Google for finding public information, but I need to be careful and double-check.

Other than that, it completely depends on luck, I guess. I'm pretty sure that feeding in-house information to them would make them much more useful, but those agents would be privately owned and maintained.



