“We believe that a democratic vision of AI is essential to unlocking its full potential and ensuring that its benefits are widely shared,” OpenAI wrote, echoing similar language in the White House memo. “We believe that democracies should continue to take the lead in AI development, guided by values such as freedom, fairness and respect for human rights.”
It proposed a number of ways OpenAI could help achieve this goal, including efforts to “streamline translation and summarization tasks, as well as study and mitigate harm to civilians,” while prohibiting its technology from being used to “harm people, destroy property or develop weapons.” Above all, it was a message from OpenAI that it is now comfortable with national security work.
The new policies emphasize “flexibility and compliance with the law,” says Heidy Khlaaf, chief AI scientist at the AI Now Institute and a security researcher who wrote a paper with OpenAI in 2022 on the possible dangers of its technology, particularly in military contexts. The company’s pivot “ultimately signals an acceptance of carrying out military and war-related activities as the Pentagon and the US military see fit,” she says.
Amazon, Google and Microsoft (OpenAI’s partner and investor) have competed for years for the Pentagon’s cloud computing contracts. These companies have learned that working with the defense sector can be incredibly lucrative, and OpenAI’s pivot, which comes as the company expects $5 billion in losses and is reportedly exploring new revenue streams such as advertising, could signal that it wants a piece of those contracts. Big Tech’s relationship with the military also no longer draws the outrage and scrutiny it once did. But OpenAI is not a cloud provider, and the technology it is building stands to do much more than simply store and retrieve data. With this new partnership, OpenAI promises to help sort through data on the battlefield, provide threat intelligence, and help make wartime decision-making faster and more effective.
OpenAI’s statements on national security may raise more questions than they answer. The company wants to mitigate harm to civilians, but which civilians? And doesn’t contributing AI models to a program that takes down drones count as developing weapons that could harm people?
“Defensive weapons are still weapons,” says Khlaaf. They “can often be positioned offensively depending on the location and purpose of a mission.”
Beyond these questions, working in defense means that the world’s largest AI company, which has incredible influence in the industry and has long pontificated about how to manage AI responsibly, will now operate in a defense technology industry that plays by an entirely different set of rules. In that system, when your customer is the US military, tech companies do not get to decide how their products are used.