Meta is planning to improve the identification of AI-generated images on Facebook, Instagram and Threads so that it is immediately clear to all users when an image is not real. The move is part of a wider effort to suppress disinformation, particularly now, ahead of the upcoming elections in the USA.
According to Nick Clegg, Meta’s president of global affairs, the company will begin labeling images, videos and audio clips generated using artificial intelligence in the coming months, in all languages supported by each of its applications.
Meta says the tools it is working on will be able to detect invisible marks, i.e. metadata added to AI-created content in compliance with the C2PA and IPTC technical standards, at scale. It expects to be able to identify and tag images from Google, OpenAI, Microsoft, Adobe, Midjourney and Shutterstock, all of which embed generative AI metadata in their images.
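For context on what such an invisible mark looks like: the IPTC standard flags generative AI imagery with a digital source type value ending in `trainedAlgorithmicMedia`, carried in the XMP metadata embedded in the image file. The sketch below is an illustrative simplification, not Meta's detection pipeline; a real detector would parse the XMP and C2PA structures properly and verify their cryptographic provenance rather than scan raw bytes.

```python
from pathlib import Path

# IPTC digital-source-type token used to flag generative-AI imagery.
# It appears inside the XMP packet embedded in JPEG/PNG files.
AI_SOURCE_MARKER = b"trainedAlgorithmicMedia"

def has_iptc_ai_marker(path: str) -> bool:
    """Naive check: scan the raw file bytes for the IPTC AI marker.

    Toy illustration only -- it shows where the signal lives, but a
    robust detector must parse the metadata structures and cannot rely
    on a substring search.
    """
    return AI_SOURCE_MARKER in Path(path).read_bytes()
```

The weakness this sketch makes obvious is the same one the article goes on to discuss: metadata like this can simply be stripped from the file.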
As for video and audio, Clegg explained that companies have not yet begun applying invisible tags to AI creations at the same scale as they have with images, and Meta is so far unable to detect video and audio generated by third-party AI tools. Instead, the company expects users to label such content themselves: it will require users to tag any video or audio created with generative AI, with penalties for those who don’t.
However, relying on users to add labels to video and audio recordings is likely to fail. Many of them will deliberately try to deceive others, while others simply won’t bother with the policy, or won’t know it exists.
Meta is also trying to make it harder for users to modify or remove invisible tags from generative AI content. The company’s FAIR AI research laboratory has developed a technology that integrates the watermarking mechanism directly into the image generation process for some types of generators, which could be valuable for open-source models because the watermark cannot simply be deleted.
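To see why baking the watermark into generation matters, it helps to contrast it with a classic post-hoc technique: hiding bits in the least-significant bit of each pixel. The toy sketch below is illustrative only and is not Meta's method; LSB marks like this are destroyed by any re-encoding, which is precisely the fragility an in-generator watermark aims to avoid.

```python
def embed_watermark(pixels, bits):
    """Hide one bit per pixel in the least-significant bit.

    `pixels` is a flat list of 0-255 intensity values, `bits` a list
    of 0/1. Each marked pixel changes by at most 1, so the mark is
    invisible to the eye -- but also trivially erased by compression.
    """
    out = list(pixels)
    for i, b in enumerate(bits):
        out[i] = (out[i] & ~1) | b
    return out

def extract_watermark(pixels, n):
    """Read back the first n hidden bits."""
    return [p & 1 for p in pixels[:n]]
```

Because an approach like FAIR's ties the mark to the generator's own weights rather than to a post-processing step, a user of an open-source model cannot simply skip the watermarking stage the way they could skip a function like `embed_watermark`.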