Hacker News

If you treat it as a supercharged search engine over a compressed snapshot of the internet in 2021, then it's quite useful. If there's ever a function I forgot, but I know how to explain what it does in natural language, ChatGPT can find what I'm looking for most of the time. On some more obscure bugs, or when I'm sorting through a new codebase, ChatGPT can help me out from time to time to understand new topics.

Of course, we shouldn't rely on ChatGPT. It has given me wrong and insecure code before. However, it's a nice tool to have around.



> Of course, we shouldn't rely on ChatGPT. It has given me wrong and insecure code before. However, it's a nice tool to have around.

You 100% should verify any code generated by ChatGPT - but this goes for any code found off the internet. I have come across bad Stack Overflow code committed to codebases before.


I think what's making these hyperinformed hallucinators useful for coding right now is that fact checking is already a fundamental part of our work. There's a bare minimum of verification required before you can even claim anything has been accomplished: the code has to run and do what we expected on the most basic input. Beyond that, we're also used to writing tests and reading code critically.
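To make that concrete, here's a hedged sketch of what "bare minimum" verification might look like. The `slugify` helper is a hypothetical stand-in for something ChatGPT generated; the point is running it on the most basic inputs before trusting it, then growing those checks into real tests.

```python
# Hypothetical helper, as if produced by ChatGPT:
def slugify(title: str) -> str:
    """Lowercase a title and join its words with hyphens."""
    return "-".join(title.lower().split())

# Bare-minimum verification on the most basic inputs,
# the kind of sanity check you'd do before committing.
assert slugify("Hello World") == "hello-world"
assert slugify("  spaced   out  ") == "spaced-out"  # collapses whitespace
assert slugify("") == ""                            # empty input edge case
```

Asserts like these are the seed of a proper test suite; reading the code critically (e.g. noticing it doesn't strip punctuation) is the other half of the job.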


I wonder what a model that could execute its own code in an interpreter and learn from the feedback would look like. I know different groups are experimenting with things like giving LLMs the ability to use tools; I'm curious how this will pan out in the end.
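The execute-and-retry loop being described could be sketched like this. Everything here is an assumption for illustration: `generate` stands in for whatever model call produces code (here a toy stub), and the loop simply feeds interpreter errors back as context for the next attempt.

```python
import subprocess
import sys

def run_snippet(code: str):
    """Execute a code string in a fresh interpreter; return (ok, output)."""
    proc = subprocess.run(
        [sys.executable, "-c", code],
        capture_output=True, text=True, timeout=10,
    )
    return proc.returncode == 0, proc.stdout + proc.stderr

def refine(generate, max_rounds=3):
    """Feedback loop: 'generate' is a stand-in for an LLM call that takes
    the last error message (or None on the first round) and returns code."""
    feedback = None
    for _ in range(max_rounds):
        code = generate(feedback)
        ok, output = run_snippet(code)
        if ok:
            return code, output
        feedback = output  # feed the traceback back to the model
    return None, feedback

# Toy "model": the first attempt raises a NameError, the second is fixed.
attempts = iter(["print(undefined_var)", "print(6 * 7)"])
code, out = refine(lambda fb: next(attempts))
```

Running code in a subprocess rather than `eval` keeps the loop's own state isolated from whatever the generated snippet does; a real system would also sandbox the interpreter.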



