Earlier this month, the company that brings us ChatGPT announced its partnership with California-based arms company Anduril to produce AI weapons. The OpenAI-Anduril system, tested in California at the end of November, allows data sharing between external actors for decision-making on the battlefield. This fits squarely with the US military's and OpenAI's plans to standardize the use of AI on the battlefield.
Costa Mesa-based Anduril makes AI-powered drones, missiles and radar systems, including Sentry surveillance towers, currently used on U.S. military bases around the world, on the U.S.-Mexico border, and on the British coast to detect migrants in boats. On December 3, the company received a three-year contract with the Pentagon for a system that gives soldiers AI solutions during attacks.
In January, OpenAI removed from its usage policy a direct ban on "activities with a high risk of physical harm", which specifically included "military and war" and "weapons development". Less than a week later, the company announced a partnership with the Pentagon on cybersecurity.
While it may have lifted its ban on weapons development, OpenAI's entry into the war industry stands in stark contradiction to its own charter. Its proclaimed goal of building "safe and beneficial AGI" (artificial general intelligence) that will not "harm humanity" is laughable when its technology is used to kill. ChatGPT could eventually, and probably soon will, write code for automated weapons, analyze intelligence for bombings, or facilitate invasions and occupations.
We should all be frightened by this use of AI for death and destruction. But it is nothing new. Israel and the United States have been testing and using AI in Palestine for years. Hebron, in fact, has been dubbed a "smart city" as the occupation imposes its tyranny through a proliferation of motion and heat sensors, facial recognition technology and video surveillance. At the center of this oppressive surveillance is the Blue Wolf System, an AI tool that scans the faces of Palestinians as they are photographed by Israeli occupation soldiers and checks them against a biometric database where information about them is stored. Once a photo is entered into the system, each person is color-coded according to their perceived "threat level", dictating whether the soldier should let them pass or arrest them. IOF soldiers are rewarded with prizes for taking the most photos in what they have called "Facebook for Palestinians", according to revelations by the Washington Post in 2021.
OpenAI's warfare technology comes as the Biden administration pushes for the United States to use AI to "meet its national security objectives." That phrase was in fact part of the title of a White House memorandum published in October this year, which calls for the rapid development of artificial intelligence "especially in the context of national security systems." Although it does not explicitly name China, it is clear that the perceived "AI arms race" with China is a central motivation behind the Biden administration's call. This is not only about weapons of war, but also about the race to develop technology more broadly. Earlier this month, the United States banned the export to China of HBM chips, a critical component of AI and high-end graphics processing units (GPUs). Former Google CEO Eric Schmidt has warned that China is two to three years ahead of the United States in AI, a major shift from his statements earlier this year, when he said the United States was ahead. When he speaks of a "threat escalation matrix" around developments in AI, he reveals that the United States sees the technology only as a tool of war and a means of asserting its hegemony. AI is the latest in America's relentless, and dangerous, provocations and fear-mongering against China, whose progress it cannot bear to see.
In response to the White House memorandum, OpenAI published a statement of its own in which it echoed many of the White House's lines on "democratic values" and "national security." But what is democratic about a company developing technology to better target and bomb people? Whose security is served by collecting data to refine war technology? The statement reveals the company's alignment with the Biden administration's anti-China rhetoric and imperialist justifications. For a company that has pushed AGI systems into mainstream society, it is deeply alarming that it abandoned its own code and immediately joined forces with the Pentagon. While it is no surprise that companies like Palantir, or Anduril itself, use AI for warfare, we should expect better from a company like OpenAI, a nonprofit that supposedly has a mission.
AI is being used to rationalize killing: on the US-Mexico border, in Palestine, and at US imperial outposts around the world. While AI systems seem innocently integrated into our daily lives, from search engines to music streaming sites, we must not forget that the same companies are putting the same technology to deadly use. ChatGPT may give you ten ways to protest, but it is probably also being trained to kill, better and faster.
From the war machine to our planet, AI in the hands of the US imperialists means only more profits for them and more devastation and destruction for all of us.