As the US prepares for its presidential election, OpenAI has presented its plans to crack down on election-related disinformation, with a focus on making the origin of information more transparent. One measure is the use of cryptography to encode provenance data into images generated by DALL-E 3. This will let platforms better detect AI-generated images, so voters can assess the credibility of the content they see.
This approach is similar to DeepMind's SynthID, which is part of Google's own election content strategy. Meta's AI image generator likewise adds an invisible watermark to its output.
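To make the provenance idea concrete, here is a minimal sketch of tamper-evident signing of image metadata. This is not OpenAI's actual scheme (industry provenance efforts such as the C2PA standard use asymmetric signatures and embed the manifest in the file itself); the function names, the shared `SIGNING_KEY`, and the manifest layout below are all hypothetical simplifications for illustration:

```python
import hashlib
import hmac
import json

# Hypothetical shared secret; a real provenance scheme would use an
# asymmetric key pair so anyone can verify without being able to sign.
SIGNING_KEY = b"demo-secret-key"

def sign_provenance(image_bytes: bytes, generator: str) -> dict:
    """Build a provenance manifest for an image and sign it."""
    manifest = {
        "generator": generator,
        "image_sha256": hashlib.sha256(image_bytes).hexdigest(),
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return manifest

def verify_provenance(image_bytes: bytes, manifest: dict) -> bool:
    """Check that the manifest is authentic and matches the image bytes."""
    claimed = dict(manifest)
    signature = claimed.pop("signature")
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(signature, expected)
            and claimed["image_sha256"] == hashlib.sha256(image_bytes).hexdigest())

image = b"\x89PNG...stand-in image bytes"
record = sign_provenance(image, "DALL-E 3")
print(verify_provenance(image, record))         # True: image untouched
print(verify_provenance(image + b"x", record))  # False: image was altered
```

The key property this illustrates is that any edit to either the image or its manifest invalidates the check, which is what lets downstream platforms flag AI-generated content whose credentials have been stripped or altered.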
OpenAI said it will soon work with journalists, researchers, and platforms to gather feedback on these measures. In the same vein, ChatGPT users will begin to see real-time news from around the world, complete with attribution and links. Users who ask basic questions (such as where or how to vote) will be directed to CanIVote.org, the official online resource for voting in the US.
In addition, OpenAI is reiterating its existing policies against attempts to impersonate candidates via deepfakes and chatbots, as well as content created to disrupt the electoral process or discourage people from voting. The company also bans apps built for political campaigning, and its new GPTs allow users to report potential violations.
OpenAI says the lessons from these measures, if successful, will help it implement similar strategies around the world.