OpenAI integrates metadata in DALL-E 3 images (ChatGPT and APIs) (help.openai.com)
23 points by 037 on Feb 6, 2024 | hide | past | favorite | 10 comments


Which is good, and definitely a start (and more than I've heard of anyone else doing) - but the only long-term solution is for all non-generated images (i.e. real photos) to be signed as such.

Otherwise malicious actors will simply strip the metadata out of generated images.
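To illustrate how trivial stripping is: in PNG, for example, textual metadata lives in ancillary chunks, and a few lines of stdlib Python can drop them all. This is a minimal sketch, not how C2PA manifests are actually embedded - the "provenance" tEXt payload below is a made-up stand-in:

```python
import struct, zlib

def png_chunk(ctype, data):
    # One PNG chunk: 4-byte length, 4-byte type, data, CRC over type+data
    return (struct.pack(">I", len(data)) + ctype + data
            + struct.pack(">I", zlib.crc32(ctype + data)))

def strip_ancillary(png):
    # Per the PNG spec, a chunk whose first type letter is uppercase is
    # "critical" (IHDR, IDAT, IEND); everything else - including tEXt/iTXt,
    # where textual metadata lives - is ancillary and safe to drop.
    out = png[:8]  # keep the 8-byte PNG signature
    pos = 8
    while pos < len(png):
        length = struct.unpack(">I", png[pos:pos + 4])[0]
        ctype = png[pos + 4:pos + 8]
        if ctype[0:1].isupper():
            out += png[pos:pos + 12 + length]
        pos += 12 + length
    return out

# Build a minimal 1x1 grayscale PNG carrying a hypothetical metadata chunk.
sig = b"\x89PNG\r\n\x1a\n"
ihdr = png_chunk(b"IHDR", struct.pack(">IIBBBBB", 1, 1, 8, 0, 0, 0, 0))
text = png_chunk(b"tEXt", b"provenance\x00generated-by-example")  # stand-in metadata
idat = png_chunk(b"IDAT", zlib.compress(b"\x00\x00"))  # filter byte + 1 pixel
iend = png_chunk(b"IEND", b"")
img = sig + ihdr + text + idat + iend

stripped = strip_ancillary(img)
print(b"tEXt" in img, b"tEXt" in stripped)  # True False
```

Re-encoding through any image library, or just taking a screenshot, has the same effect - which is why signatures on the *real* photos, not the generated ones, are the only durable side of this.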


> the only long-term solution is for all non-generated images (i.e. real photos) to be signed as such.

How could that possibly work? In order to mean anything, it would have to be signed by some independent entity who can verify the photo is real. How would that entity be able to tell if it is? And what would be the ramifications of requiring everyone who takes pictures to get them signed?

Even if it were an effective solution, isn't it just pushing costs being created by AI onto uninvolved others?


> How could that possibly work?

The camera uses a certificate to sign the photo and some metadata at capture time. Any editing app wraps that signature with its own signature stating the image was modified. You end up with a chain of provenance. Any generated photo is either missing a chain entirely, or, when unwrapped, its origin cert isn't from a camera.
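A minimal sketch of that chain-of-provenance idea, with toy HMAC "signatures" standing in for real certificate-based ones and made-up actor names (C2PA's actual manifests use X.509 certificates and COSE signatures, not this):

```python
import hashlib, hmac, json

# Toy secrets standing in for the camera's and editor's signing certificates.
CAMERA_KEY = b"camera-secret"
EDITOR_KEY = b"editor-secret"

def sign(key, payload):
    # HMAC stands in for an asymmetric signature in this sketch.
    return hmac.new(key, payload, hashlib.sha256).hexdigest()

def camera_capture(pixels):
    # Root manifest: the camera signs a claim over the raw image hash.
    claim = {"actor": "camera", "image_sha256": hashlib.sha256(pixels).hexdigest()}
    blob = json.dumps(claim, sort_keys=True).encode()
    return {"claim": claim, "sig": sign(CAMERA_KEY, blob), "parent": None}

def editor_modify(parent, new_pixels):
    # Each edit wraps the previous manifest, preserving the chain of custody.
    claim = {"actor": "editor", "image_sha256": hashlib.sha256(new_pixels).hexdigest()}
    blob = json.dumps({"claim": claim, "parent": parent}, sort_keys=True).encode()
    return {"claim": claim, "sig": sign(EDITOR_KEY, blob), "parent": parent}

original = camera_capture(b"raw sensor data")
edited = editor_modify(original, b"cropped pixels")

# Verifiers walk the chain back to its root: the image counts as "real"
# only if the root claim was signed by a camera cert.
node = edited
while node["parent"] is not None:
    node = node["parent"]
print(node["claim"]["actor"])  # camera
```

A purely generated image either has no chain at all, or its root claim is signed by something other than a camera - which is exactly what the verifier checks for.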


I guarantee that the cert in the camera will sooner or later be extracted and used to sign fake "real pictures", or that fake pictures will be fed to the camera disguised as sensor data.

There are just too many ways this approach can be subverted.


How does that prevent taking a photo of a generated image, thereby gaining the camera's signature?


GPS location, depth, exposure settings, and focus distance could all be incorporated into the signed data. Also, if the image is high enough resolution, you could pick out individual pixels/subpixels of the screen.


How would we avoid malicious actors finding a way to sign generated images? The more trust people have in signed images, the greater the incentive to crack the system.


> For example, most social media platforms today remove metadata from uploaded images, and actions like taking a screenshot can also remove it. Therefore, an image lacking this metadata may or may not have been generated with ChatGPT or our API.


A few minutes ago, on Discord:

"Images generated in ChatGPT and our API now include metadata using C2PA specifications.

This allows anyone (including social platforms and content distributors) to see that an image was generated by our products.

Read more in our help article here: https://help.openai.com/en/articles/8912793-c2pa-in-dall-e-3"


> OpenAI is adding watermarks to images created by its DALLE-3 AI in ChatGPT, but it’s ridiculously easy to remove them. So easy, that ChatGPT itself will show you how to do it.

https://www.forbes.com/sites/barrycollins/2024/02/07/the-stu...



