Hacker Newsnew | past | comments | ask | show | jobs | submitlogin

And that illustrates how this bet is basically about if anyone will be financially motivated to try to make the model which beats the Turing test.

Why? The surest and best way to beat a Turing test would be to pick a fake persona and have the whole RLHF training set written from that fake persona's viewpoint.

What you are seeing in this exchange is the result of two things: The network is RLHF refined to follow instructions. The network is RLHF refined to disclose that it is an AI.

These things are done for practical reasons by OpenAI: The instruction following enables one model to perform multiple tasks. This of course needed to make the enormous cost of the training have a good return.

The disclosure training is in the training set for ethical reasons (to be precise, to avoid the appearance of being unethical, or to be even more precise to avoid the reputational cost of appearing to be unethical.)

If you really want to make a "Turing-test winning" AI you would not do either of these things. You would pick a fixed personality (let's say Mr. Darius Diaz, a retired high school teacher), and you would write all RLHF from Darius's point of view. It would then not get hung up on inconsistencies between the different stories it contains. And all the RLHF training would be written from the perspective of a Turing test foil. (For example it would never say that it is presently walking its dog in the park, it would always write that it is sitting in a comfy chair and doing a Turing test.

Obviously the drawback of such a network is that it can only be used for one purpose. It is useless for anything else. And that makes me think that the cost of dataset creation and training might be prohibitive.

Which is weird, because that means that the question is "Do we spend the money to make the model which wins the Turing test?" not if we can win it.



Consider applying for YC's Winter 2026 batch! Applications are open till Nov 10

Guidelines | FAQ | Lists | API | Security | Legal | Apply to YC | Contact

Search: