Hacker News | anguyen8's comments

There was an earlier post this year showing that SOTA models fail to count the legs of 5-legged zebras and other counterfactual images.

https://news.ycombinator.com/item?id=44169413


Hello 0xab,

Sorry that we missed your work. There is a lot of work in this area, both textual and visual, especially on social biases.

We wish we could mention them all, but space is limited, so we discuss only the most relevant ones. We'll consider discussing yours in our next revision.

Genuine question: would you categorize the type of bias in our work as "social"?


It sounds like you asked multiple questions in the same chat thread/conversation. Once the model knows that it is facing weird data, or that it was wrong in its previous answers, it can stay in that "I'm facing manipulated data" mode for subsequent questions. :-)

If you have the Memory setting ON, I've observed that it sometimes also answers a question based on your prior questions/threads.


https://imgur.com/cO7eFNt

o3 Chat is similarly wrong, saying {4}.


Interesting set of fake Adidas logos. LOL

But models fail on many logos, not just Adidas; e.g., Nike, Mercedes, and Maserati logos as well. I don't think they can recall a "fake Adidas logo", but it'd be interesting to test!


More results in the arXiv paper: https://arxiv.org/abs/2505.16181


Generative AI (GenAI) holds significant promise for automating everyday image editing tasks, especially following the recent release of GPT-4o on March 25, 2025. However, what subjects do people most often want edited? What kinds of editing actions do they want to perform (e.g., removing or stylizing the subject)? Do people prefer precise edits with predictable outcomes or highly creative ones? By understanding the characteristics of real-world requests and the corresponding edits made by freelance photo-editing wizards, can we draw lessons for improving AI-based editors and determine which types of requests can currently be handled successfully by AI editors? In this paper, we present a unique study addressing these questions by analyzing 83k requests posted over the past 12 years (2013–2025) to the Reddit PSR community, which collected 305k PSR-wizard edits. According to human ratings, only about 33% of requests can be fulfilled by the best AI editors (including GPT-4o, Gemini-2.0-Flash, and SeedEdit). Interestingly, AI editors perform worse on low-creativity requests that require precise editing than on more open-ended tasks. They often struggle to preserve the identity of people and animals, and frequently make non-requested touch-ups. On the judging side, VLM judges (e.g., o1) behave differently from human judges and may prefer AI edits over human edits.

