OpenAI removes ban on military use of AI tools for national security scenarios

The move follows OpenAI's formation of a team in October 2023 to combat “catastrophic risks” arising from the development of AI models.

Kurt Robson, January 17, 2024

OpenAI has deleted the part of its terms and conditions that prohibited the use of its AI technology for military and warfare purposes.

An OpenAI spokesperson told Verdict that while the company’s policy still does not allow its tools to be used to harm people, develop weapons, conduct communications surveillance, or injure others or destroy property, there are national security use cases that align with its mission.

“For example, we are already working with DARPA to spur the creation of new cybersecurity tools to secure open source software that critical infrastructure and industry depend on,” the spokesperson said, adding: “It was not clear whether these beneficial use cases would have been allowed under ‘military’ in our previous policies. So the goal with our policy update is to provide clarity and the ability to have these discussions.”

The ChatGPT maker’s usage policy previously banned any activity involving “weapons development” and “military and warfare”. The update, which went live on 10 January, removed the ban on “military and warfare”. OpenAI retained the blanket ban on using the service “to harm yourself or others”, which cites using AI to “develop or use weapons” as an example.

“We’ve updated our usage policies to be more readable and added service-specific guidance,” OpenAI said in a blog post. “We cannot predict all beneficial or abusive uses of our technology, so we proactively monitor for new abuse trends,” the post added.

Sarah Myers West, managing director of the AI Now Institute, told The Intercept that the use of AI to target civilians in Gaza makes this a notable moment for OpenAI to change its terms of service.
Fox Walker, an analyst at research company GlobalData, told Verdict that the new guidelines “could very well lead to further proliferation of AI use in defence, security, and military contexts.”

“Whether it be the use of non-lethal technology, the development of military strategy, or simply the use of budgeting tools, there are many areas where AI can assist military leaders without causing harm to others or creating new weapons,” Walker said.

In October, OpenAI formed a new team to monitor, predict, and try to protect against “catastrophic risks” posed by AI, such as nuclear threats and chemical weapons. The team, named Preparedness, will also work to counter other dangers such as autonomous replication and adaptation, cybersecurity threats, biological and radiological attacks, and targeted persuasion.

In 2022, OpenAI researchers co-authored a study that flagged the risks of using large language models for warfare.

An OpenAI spokesperson previously said: “We believe that frontier AI models, which will exceed the capabilities currently present in the most advanced existing models, have the potential to benefit all of humanity. But they also pose increasingly severe risks.”

GlobalData estimates that the total AI market will be worth $383.3bn by 2030, implying a compound annual growth rate of 21% between 2022 and 2030.
