Valve has released new rules for developers who publish games on Steam that use artificial intelligence technology, which means developers will have to disclose when their games use it. The goal of the changes appears to be to increase transparency around the use of artificial intelligence in Steam games, offer protection against the risks associated with AI-generated content, and enable customers to make an informed decision about whether to purchase an AI-powered game.
Under the new rules, developers will have to disclose whether a game contains pre-generated content created using artificial intelligence, and promise that it is not illegal and does not infringe copyright. They will also have to state whether their game generates AI content live, i.e. at runtime. In that case, developers will need to detail the safeguards they have put in place to prevent their AI from generating illegal content. Players will be able to see whether a game uses AI on its store page, and will have new options to report illegal AI-generated content if they come across it in-game.
While some developers have enthusiastically incorporated the new technology into their games and production processes, the wider industry is divided on the use of generative artificial intelligence. On the one hand, several studios have described AI as helpful for game testing, creating early concepts, or offsetting expensive parts of the development process, such as recording actors' voices. Others, however, fear that artificial intelligence could be used to cheaply replace existing artists and other creators (reports of this have already begun to emerge), and oppose companies that release AI-generated assets.
According to Valve’s blog post, its stance and rules regarding AI-generated content are likely to change as the technology and its legal framework evolve.