Hacker News

> Edit: GPT-4 fails if I remove the sentence between (()).

If you remove that sentence, nothing indicates that you can see that you picked the door with the car behind it. You could maybe infer that a rational contestant would do so, but that's not a given ...



I think that's meant to be covered by "transparent doors" being specified earlier. On the other hand, if that were the case, then Monty opening one of the doors could not result in "revealing a goat".
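The point above can be made concrete with a quick simulation. This is a minimal sketch (the function name and structure are mine, not from the thread): with opaque doors, switching wins about 2/3 of the time; with transparent doors, a contestant who can see the car just picks it, so staying always wins and Monty's reveal carries no information.

```python
import random

def monty_hall(trials=100_000, transparent=False):
    """Simulate the Monty Hall game.

    With transparent=True (the variant discussed above), the contestant
    can see the car and simply picks that door.
    """
    wins_stay = wins_switch = 0
    for _ in range(trials):
        car = random.randrange(3)
        if transparent:
            pick = car  # contestant sees through the doors
        else:
            pick = random.randrange(3)
        # Monty opens a door that is neither the pick nor the car;
        # with transparent doors this "reveals" nothing new.
        opened = next(d for d in range(3) if d != pick and d != car)
        switched = next(d for d in range(3) if d != pick and d != opened)
        wins_stay += (pick == car)
        wins_switch += (switched == car)
    return wins_stay / trials, wins_switch / trials
```

With opaque doors this returns roughly (0.33, 0.67); with transparent doors, exactly (1.0, 0.0).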


> You're presented with three transparent closed doors.

I think if you mentioned that to a human, they'd at least become confused and ask whether they'd understood it correctly.


> You're presented with three transparent closed doors.

A reasonable person would expect that you can see through a transparent thing that's presented to you.


A reasonable person might also overlook that one word.


"Overlooking" is not an affordance one should hand to a machine. At minimum, it should bail and ask for correction.

That it doesn't, that relentless stupid overconfidence, is why trusting this with anything of note is terrifying.


Why not? We should ask how the alternatives would do, especially since human reasoning is itself a machine process. It's notable that the errors of machine learning are getting closer and closer to the sort of errors humans make.

Would you have this objection if we, for example, perfectly copied a human brain in a computer? That would still be a machine, and it would make similar mistakes.


I don't think the rules for "machines" apply to AI any more than they apply to the biological machine that is the human brain.


It's not that it misses that the doors are transparent; it's that the prompt only says you picked "one" of the doors, not the one you think has the car.




