At a time when fraudsters are using generative artificial intelligence for financial fraud and reputational damage, technology companies are coming up with methods to help users authenticate digital content – at least still images, to begin with. OpenAI now includes provenance metadata in images generated with ChatGPT and the DALL-E 3 API, and the mobile apps will receive the same update by February 12.

The metadata follows the open C2PA standard (Coalition for Content Provenance and Authenticity), so when you upload such an image to the Content Credentials Verify tool, you can trace its origin. For example, an image created using ChatGPT will carry an initial metadata manifest indicating its origin in the DALL-E 3 API, followed by a second manifest indicating that it passed through ChatGPT.
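The chained-manifest idea can be illustrated with a minimal sketch. Note that the field names and structure below are simplified for illustration and are not the actual C2PA schema, which is considerably richer (signatures, hashes, assertions):

```python
# Illustrative sketch of a C2PA-style manifest chain for a ChatGPT image.
# Field names here are hypothetical, not the real C2PA schema.

manifest_chain = [
    {   # first manifest: the image's origin in the DALL-E 3 API
        "generator": "DALL-E 3 API",
        "action": "created",
    },
    {   # second manifest: the image was surfaced through ChatGPT
        "generator": "ChatGPT",
        "action": "published",
    },
]

def trace_origin(chain):
    """Walk back to the earliest manifest to find where the image originated."""
    return chain[0]["generator"] if chain else None

print(trace_origin(manifest_chain))  # earliest entry names the original generator
```

A verifier such as Content Credentials Verify effectively performs this walk, but over cryptographically signed manifests rather than plain dictionaries.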

Despite the sophisticated cryptographic technology behind the C2PA standard, this verification method only works if the metadata is intact; the tool is useless if you upload an AI-generated image with no metadata – which is the case with any screenshot or any image re-uploaded to social media. On its FAQ page, OpenAI admits that this is not a large-scale solution to the disinformation problem, but believes the move may encourage users to actively seek out the origins of images.
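A minimal sketch makes the limitation concrete: a screenshot re-rasterizes only the pixels, so the embedded provenance is lost and any metadata-based check fails. The structures below are hypothetical stand-ins, not real image formats:

```python
# Sketch of why screenshots defeat metadata-based provenance checks.
# "original" stands in for a file with embedded C2PA metadata.

original = {
    "pixels": "...image data...",
    "c2pa_metadata": {"generator": "DALL-E 3 API"},
}

def screenshot(image):
    # A screenshot copies the rendered pixels and carries no embedded metadata.
    return {"pixels": image["pixels"]}

def verify(image):
    # A Content-Credentials-style check can only succeed if metadata survives.
    return "verified" if image.get("c2pa_metadata") else "no provenance data"

print(verify(original))              # metadata intact: check succeeds
print(verify(screenshot(original)))  # metadata stripped: check fails
```

The same failure occurs when social platforms transcode uploads, which is why OpenAI frames the feature as a nudge toward provenance awareness rather than a complete defense.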

While OpenAI’s latest effort to combat fake content is currently limited to still images, Google’s DeepMind already offers SynthID, a technology for digitally watermarking AI-generated images and audio. Meanwhile, Meta is testing invisible watermarks in its AI image generator that may be less susceptible to tampering.