Last Thursday, Senators Elizabeth Warren and Eric Schmitt introduced a bill aimed at fostering more competition for Pentagon contracts in AI and cloud computing, areas that Amazon, Microsoft, Google and Oracle currently dominate. “The way the big guys get bigger in AI is to suck up everyone else’s data and use it to train and develop their own systems,” Warren told the Washington Post.
The new bill “would require a competitive award process” for contracts, prohibiting the Pentagon from making “no-bid” awards to companies for cloud services or foundation AI models. (The senators introduced the bill a day after OpenAI announced its technology would be deployed on the battlefield for the first time, in a partnership with Anduril, completing a year-long reversal of its policy against military collaboration.)
While Big Tech faces antitrust scrutiny, including the ongoing trial against Google over its dominance in search and a newly opened investigation into Microsoft, regulators are also accusing AI companies of outright lying.
The Federal Trade Commission took action against smart camera company IntelliVision on Tuesday, saying the company was making false claims about its facial recognition technology. IntelliVision has touted its AI models, which are used in home and commercial security camera systems, as operating without gender or racial bias and as being trained on millions of images, two claims the FTC says are false. (According to the FTC, the company could not substantiate the bias claim, and the system was trained on only 100,000 images.)
A week earlier, the FTC had filed a similar deception complaint against security giant Evolv, which sells AI-based security screening products to stadiums, primary and secondary schools, and hospitals. Evolv touts its systems as offering better protection than simple metal detectors, saying they use AI to accurately detect guns, knives and other threats while ignoring innocuous objects. The FTC alleges that Evolv inflated its accuracy claims and that its systems failed repeatedly, including a 2022 incident in which they failed to detect a seven-inch knife that was ultimately used to stab a student.
This is in addition to complaints the FTC filed this past September against a number of AI companies, including one that sold a tool for generating fake product reviews and another that sold “AI lawyer” services.