“This work represents a significant step forward in strengthening our information advantage as we combat sophisticated disinformation campaigns and synthetic media threats,” says Bustamante. Hive was chosen from a group of 36 companies to test its deepfake detection and attribution technology with the DOD. The contract could enable the department to detect and counter AI deception on a large scale.
Defending against deepfakes is “existential,” says Kevin Guo, CEO of Hive AI. “This is the evolution of cyberwarfare.”
Hive’s technology was trained on a large volume of content, some AI-generated and some not. The models learn to pick up on signals and patterns in AI-generated content that are invisible to the human eye but detectable by another AI model.
“It turns out that every image generated by one of these generators contains this kind of pattern if you know where to look for it,” Guo says. The Hive team constantly tracks new generative models and updates its technology accordingly.
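Hive has not published how its detector works, but the general recipe Guo describes (train a classifier on labeled real and synthetic images until it learns the generators’ telltale patterns) can be sketched in a few lines. Everything below, from the directory layout to the choice of backbone, is a hypothetical illustration, not Hive’s implementation.

```python
# Illustrative sketch only: a generic binary classifier for "real vs. AI-generated"
# images, in the spirit of the approach Guo describes. Hive's actual models,
# training data, and features are not public; all names and paths are hypothetical.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

# Standard preprocessing. A production detector might avoid aggressive resizing,
# since resampling can wash out the subtle generator artifacts the model
# is supposed to learn.
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

# Assumed layout: data/train/real/*.png and data/train/generated/*.png,
# so ImageFolder assigns class 0 = generated, class 1 = real (alphabetical).
train_set = datasets.ImageFolder("data/train", transform=preprocess)
loader = DataLoader(train_set, batch_size=32, shuffle=True)

# Fine-tune a stock backbone with a two-way head.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

model.train()
for images, labels in loader:
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()
```

Because each new generator leaves different artifacts, a detector trained this way goes stale as models evolve, which is why the constant retraining Guo mentions matters.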
The tools and methodologies developed under this initiative have the potential to be adapted for broader use, not only to address defense-specific challenges but also to protect civilian institutions against disinformation, fraud, and deception, the Department of Defense said in a statement.
Hive’s technology offers industry-leading performance in detecting AI-generated content, says Siwei Lyu, a professor of computer science and engineering at the University at Buffalo. He was not involved in Hive’s work but has tested its detection tools.
Ben Zhao, a professor at the University of Chicago who has also independently evaluated Hive’s deepfake detection technology, agrees but points out that it is far from foolproof.
“Hive is certainly better than most commercial entities and some of the research techniques we’ve tried, but we’ve also shown that it’s not at all difficult to circumvent,” says Zhao. His team found that adversaries could tamper with images in ways that bypassed Hive’s detection.
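Zhao’s specific attack methods aren’t detailed here, but one classic way to slip a synthetic image past a learned detector is a small gradient-based perturbation (an FGSM-style attack). The sketch below reuses the hypothetical `model` and label convention from the earlier training example; it illustrates the general vulnerability, not Zhao’s experiments.

```python
# Illustrative FGSM-style evasion sketch, not a reconstruction of Zhao's attacks.
# Nudge a generated image a tiny step in the direction that makes the detector
# score it as "real", keeping the change visually imperceptible.
import torch
import torch.nn as nn

def evade(model: nn.Module, image: torch.Tensor, epsilon: float = 0.01) -> torch.Tensor:
    """Perturb `image` (shape [1, 3, H, W], values in [0, 1]) toward the 'real' class."""
    model.eval()
    image = image.clone().requires_grad_(True)
    target = torch.tensor([1])  # class 1 = real, per the training sketch above
    loss = nn.CrossEntropyLoss()(model(image), target)
    loss.backward()
    # Step against the gradient to reduce the loss on the target label;
    # epsilon bounds how much any pixel can change.
    adversarial = (image - epsilon * image.grad.sign()).clamp(0.0, 1.0)
    return adversarial.detach()
```

A few such pixel-level nudges can flip a detector’s verdict without visibly altering the image, which is exactly why Zhao cautions that detection tools, however strong, should not be treated as foolproof.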