Which is good, and definitely a start (and more than I've heard of anyone else doing), but the only long-term solution is for all non-generated images (i.e. real photos) to be signed as such.
Otherwise malicious actors will just strip the metadata out of the generated ones.
> the only long-term solution is for all non-generated images (i.e. real photos) to be signed as such.
How could that possibly work? In order to mean anything, it would have to be signed by some independent entity who can verify the photo is real. How would that entity be able to tell if it is? And what would be the ramifications of requiring everyone who takes pictures to get them signed?
Even if it were an effective solution, isn't it just pushing the costs created by AI onto uninvolved third parties?
The camera uses an embedded certificate to sign the photo it takes, along with some metadata. Any editing app then wraps that signature with its own signature stating that the image was modified. You end up with a chain of provenance, and any generated photo either has no chain at all or, when unwrapped, has an origin certificate that isn't from a camera.
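A minimal sketch of what that chain could look like, assuming Ed25519 signatures over JSON manifests; the function names (make_manifest, wrap_manifest, verify_chain) are illustrative, not the API of C2PA or any real standard:

    import json, hashlib
    from cryptography.hazmat.primitives.asymmetric import ed25519

    def _digest(obj) -> bytes:
        # Canonical byte encoding of a manifest payload for signing/hashing.
        return json.dumps(obj, sort_keys=True).encode()

    def make_manifest(image_bytes, metadata, camera_key):
        # Camera-side: sign the image hash plus capture metadata.
        payload = {"image_sha256": hashlib.sha256(image_bytes).hexdigest(),
                   "metadata": metadata}
        return {"payload": payload,
                "signature": camera_key.sign(_digest(payload)).hex(),
                "parent": None}

    def wrap_manifest(parent, edited_bytes, note, editor_key):
        # Editor-side: sign the edited image hash plus the parent manifest,
        # leaving the original camera signature intact underneath.
        payload = {"image_sha256": hashlib.sha256(edited_bytes).hexdigest(),
                   "note": note,
                   "parent_sha256": hashlib.sha256(_digest(parent)).hexdigest()}
        return {"payload": payload,
                "signature": editor_key.sign(_digest(payload)).hex(),
                "parent": parent}

    def verify_chain(manifest, pubkeys):
        # Walk outermost-first with one public key per link; a real system
        # would also check that the innermost cert chains to a camera vendor CA.
        node = manifest
        for key in pubkeys:
            try:
                key.verify(bytes.fromhex(node["signature"]), _digest(node["payload"]))
            except Exception:
                return False
            node = node["parent"]
        return node is None

    # Toy usage: camera signs, an editor wraps, a verifier unwinds the chain.
    camera_key = ed25519.Ed25519PrivateKey.generate()
    editor_key = ed25519.Ed25519PrivateKey.generate()
    original = make_manifest(b"raw sensor bytes", {"iso": 200}, camera_key)
    edited = wrap_manifest(original, b"edited bytes", "cropped", editor_key)
    assert verify_chain(edited, [editor_key.public_key(), camera_key.public_key()])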
I guarantee that sooner or later the cert in the camera will be extracted and used to sign fake "real" pictures, or that fake images will be fed to the camera disguised as the camera's sensor data.
There are just too many ways this approach can be subverted.
GPS location, depth, exposure settings, and focus distance could be incorporated into the signed data. Also, if the image is high enough resolution, you could pick out the individual pixels/subpixels of a screen that was photographed.
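For illustration, this is the kind of capture metadata that could be folded into the signed payload in the sketch above; the field names are hypothetical, not a real EXIF or C2PA schema:

    import hashlib

    # Hypothetical metadata passed as the `metadata` argument to make_manifest();
    # all field names here are illustrative.
    capture_metadata = {
        "gps": {"lat": 51.5074, "lon": -0.1278, "alt_m": 11.0},
        "depth_map_sha256": hashlib.sha256(b"raw depth map bytes").hexdigest(),
        "exposure": {"iso": 200, "shutter_s": 1 / 250, "aperture_f": 2.8},
        "focus_distance_m": 1.4,
        "captured_at": "2024-05-01T12:00:00Z",
    }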
How would we avoid malicious actors finding a way to sign generated images? The more trust people have in signed images, the greater the incentive to crack the system.
> For example, most social media platforms today remove metadata from uploaded images, and actions like taking a screenshot can also remove it. Therefore, an image lacking this metadata may or may not have been generated with ChatGPT or our API.
> OpenAI is adding watermarks to images created by its DALLE-3 AI in ChatGPT, but it’s ridiculously easy to remove them. So easy, that ChatGPT itself will show you how to do it.