Just a few days ago, OpenAI’s usage policy page explicitly stated that the company forbids the use of its technology for “military and warfare” purposes. That line has now been deleted. As first spotted by The Intercept, the company updated the page on January 10. It still prohibits the use of its large language models (LLMs) for anything that could cause harm, and warns against using its services to develop or deploy weapons. However, the company removed the wording that referred directly to military and warfare.

Interestingly, this change comes at a time when military agencies around the world are showing interest in the use of artificial intelligence. “Given the use of AI systems in attacks on civilians in Gaza, the decision to remove the words military and war from the OpenAI Acceptable Use Policy is a remarkable moment,” said Sarah Myers West, managing director of the AI Now Institute.

The explicit mention of military and warfare in the list of prohibited uses suggested that OpenAI could not work with government agencies such as the Department of Defense, which typically awards lucrative contracts. The company currently has no product that could directly kill or physically harm anyone. But as The Intercept reported, its technology could be used for tasks that might ultimately lead to people being killed.

When asked about the change in wording, OpenAI spokesperson Niko Felix said the company’s goal “was to create a set of universal policies that are easy to remember and apply.” OpenAI is said to state specifically that AI must not be used to develop weapons or to harm others; however, the spokesperson reportedly declined to clarify whether the ban on using the technology to harm others covers all types of military use beyond weapons development.

Source: engadget.com