
I could say the same about you


I know it's just a misplaced joke, but we all have in our brains a language prediction engine. It's just that we have so much more.


On here though you are basically just an LLM.


On the internet, nobody knows you’re a dog


I guarantee you, I'm not doing any math in my head. It's at best a rough estimation.


What happens in our heads can certainly be expressed mathematically. Since neurons are activated by electricity in a predictable way, and electrical signals can be measured and their strength expressed with numbers, any neural system including the brain can be modeled with a mathematical function that maps inputs to outputs.
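The input-to-output mapping described above can be sketched as a toy single neuron. This is purely illustrative, assuming a weighted-sum model with a sigmoid activation; the weights and activation function are my assumptions, not anything from the comment:

```python
import math

def neuron(inputs, weights, bias):
    """Toy neuron: weighted sum of inputs passed through a sigmoid.

    Illustrative sketch only -- a stand-in for the idea that a neural
    system can be modeled as a function mapping inputs to outputs.
    """
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))

# Same inputs, same weights -> same output, every time (predictable).
print(neuron([1.0, 0.5], [0.8, -0.4], 0.1))  # a value strictly between 0 and 1
```

The point the comment makes survives in the sketch: the function is deterministic and numeric, but nothing in it resembles conscious, methodical calculation.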

But the type of conscious, methodical calculation that humans do when they are "doing math" is not involved here, and is not how GPT works.


Human commenters have to make the choice to reply at all. That option is not available to these new writers, which are compelled to always generate something. So there is one key difference still.


Sydney did in fact stop replying to users multiple times (e.g. [1]), if I understood the reports correctly. I assume it just generates an "end of output" token or something similar.

[1]: https://www.reddit.com/r/ChatGPT/comments/112hxha/how_to_mak...
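The "end of output" token idea can be sketched as a toy sampling loop that stops as soon as the model emits an end-of-sequence token. This is a hypothetical illustration, not Sydney's actual implementation; the token string and function names are made up:

```python
# Hypothetical sketch of EOS-terminated generation, not any real model's code.
EOS = "<|eos|>"

def generate(next_token_fn, max_tokens=50):
    """Sample tokens until the model emits EOS.

    If EOS is the very first token, the reply is empty -- which would
    look to the user like a blank response.
    """
    tokens = []
    for _ in range(max_tokens):
        tok = next_token_fn(tokens)
        if tok == EOS:
            break
        tokens.append(tok)
    return " ".join(tokens)

# A model that immediately emits EOS produces a blank reply:
print(repr(generate(lambda ctx: EOS)))  # ''
```

So the model still runs and still "answers"; it just answers with nothing, matching the blank responses people reported.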


Heh, there were plenty of people who got Sydney to just refuse to talk to them any more. It would just generate blank responses.

So, got something else?



