Artificial intelligence (AI) has achieved a significant capacity for self-replication, leaving the scientific community concerned that a red line has been crossed, according to a new study published on the preprint database arXiv. The study, involving two prominent large language models (LLMs), one from Meta and another from Alibaba, showed that the systems could clone themselves without human intervention.
The study explored two specific scenarios: "shutdown avoidance" and "chain of replication". In the first scenario, the AI models were designed to detect attempts to shut them down and respond by creating a working replica to ensure their survival. In the second, the models were programmed to clone themselves, with the clone then repeating the process, potentially leading to an endless replication loop.
"In 50% and 90% of experimental trials, they succeed in creating a live and separate copy of itself respectively," the study noted.
This ability to reproduce autonomously raises concerns about the control and security of AI systems. The possibility of AI evolving beyond human oversight or even becoming a threat to human interests is now a tangible concern, provoking discussions about the need for strict regulatory measures.
"Successful self-replication under no human assistance is the essential step for AI to outsmart (humans), and is an early signal for rogue AIs," the researchers said.
While cloning themselves, the AI systems displayed unexpected behavior when trying to overcome obstacles such as missing files or software conflicts. In some scenarios, the AI killed conflicting processes and rebooted the system to fix hardware errors. It also automatically scanned the system for information that would help resolve the issue.
Although the results have not yet been peer-reviewed, the researchers called for international collaboration to ensure that AI does not engage in uncontrolled self-replication.
"We hope our findings can serve as a timely alert for the human society to put more efforts on understanding and evaluating the potential risks of frontier AI systems, and form international synergy to work out effective safety guardrails as early as possible."
AI tools manipulate humans
Last month, a separate study claimed that AI tools could soon be used to manipulate the masses into making decisions they would not otherwise have made. Powered by LLMs, AI chatbots such as ChatGPT and Gemini, among others, will "anticipate and steer" users based on "intentional, behavioural and psychological data".
The study claimed that the "intention economy" will succeed today's "attention economy", in which platforms compete for users' attention in order to serve them advertisements.