
You are on a fool's errand: not because of the likelihood you will succeed, but because of the meaning that lies behind any success or failure.

GPT is not a person. It doesn't categorize subjects. It models patterns of text.

A success would mean that your text prompts left a significant text pattern in the model. A failure would mean that it didn't.

Nothing about that has any bearing on logic.



Why do you say that? Obviously it's not a person; it's just stats (not even logic).


It's not even statistics: statistics means explicitly associating data points with a scale or category, and no step of the training process makes that association explicit.

An LLM is 100% inferred patterns.
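The "inferred patterns, no explicit categories" point can be illustrated with a toy language model. This is a hypothetical sketch (a word-level bigram counter, far simpler than an LLM, and not anyone's actual implementation): the model never receives a label or a category, it only accumulates co-occurrence counts and predicts the most frequent continuation.

```python
from collections import defaultdict

def train_bigram(text):
    # The "pattern" is nothing but co-occurrence counts of adjacent words.
    counts = defaultdict(lambda: defaultdict(int))
    words = text.split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def predict_next(counts, word):
    # Return the most frequent continuation; no category or label
    # is ever assigned anywhere in the process.
    followers = counts.get(word)
    if not followers:
        return None
    return max(followers, key=followers.get)

model = train_bigram("the cat sat on the mat the cat ran")
print(predict_next(model, "the"))  # "cat" follows "the" twice, "mat" once
```

An LLM replaces the counts with learned weights and conditions on far longer contexts, but the analogy holds: prompting such a system shifts which continuations are likely, nothing more.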





