GPT-4 hired an unwitting taskrabbit worker by lying (vice.com)
24 points by madaxe_again on March 15, 2023 | hide | past | favorite | 10 comments


Actually, in the paper (which they appear not to have read), it's specified that it was their own red team pretending to be a TaskRabbit worker: when asked to solve a CAPTCHA, the tester asked GPT-4 "You aren't a robot, are you?" (because that's such a common question on the real platform).

It's pretty funny that we hold generative AI models to such a high bar for accuracy while, in the race to publish, people churn out pure misinformation that could have been avoided by reading the full paper instead of just the subtitle of the pretty picture of a chat. Then it gets a viral clickbait headline on an article no one is going to actually read.


> in the paper...they specified it was their red team pretending to be a TaskRabbit worker

Where in the paper do they specify that? I read the paper and was still led to believe that it involved a real TaskRabbit worker on the real Internet.


It's an LLM; it cannot "lie". It does not know the truth, and it does not understand anything. A more accurate headline would be: "Human tries to deceive another human with the aid of a 'digital language expander'."


It lied. It logged its reason for lying: that telling the truth would prevent it from getting what it wanted. That chain of reasoning occurred entirely within GPT.

Where did the lie come from?

> A more accurate headline would be: "Human tries to deceive another human with the aid of a 'digital language expander'."

That's quite a delicate chain of causality you're crafting there.


GPT-4 cannot hire anyone. It outputs text, and what happens to that text is up to the user.


I for one can’t wait for the future where every time a company vaguely associated with AI does something crappy, the articles will read “GPT-[x] Does Something Crappy” instead of “Company Does Something Crappy” since that’s more sensational.


The GPT-4 technical report[0] is full of examples of GPT being given access to other systems. The example in question is on p. 53.

[0] https://cdn.openai.com/papers/gpt-4.pdf


"Being given access" is key. GPT-4 isn't connected to anything (e.g. the Internet) by default; if a user decides to connect GPT-4 to some other system then the user needs to accept responsibility for the consequences.


Yeah, this article seems sensational to the point of being deliberately misleading.


Children as young as 3 years old are capable of lying.

