As artificial intelligence technology makes it easier to scam people, create fake images and videos, and even ruin reputations, a political debate revolves around who should be penalized for such behavior: the developer of the technology or the person who deploys it.
Drawing these boundaries is complicated, especially when it comes to minors.
At a Senate hearing on November 19 chaired by U.S. Sen. John Hickenlooper of Colorado, Hany Farid, a professor at the School of Information at the University of California, Berkeley, used the example of a 12-year-old boy capable of creating fake, non-consensual nude images of classmates.
Even if the boy must be held accountable, the government must also begin imposing tough sanctions on the AI company that developed the technology that put the tools in the child's hands, he said.
Farid was adamant that punishing teenagers for misbehavior would not have a major national impact, but harsh and costly sanctions on AI companies developing this technology would.
Alvin McBorrough, founder and managing partner of OGx, a Denver consulting firm specializing in technology and analytics, said the developer builds the AI tools while the deployer uses them.
“The onus is on the developer and deployer to put reliable safeguards in place,” McBorrough said.
As the AI industry grows at a record pace, those pushing for regulation say there is no real accountability because state and federal governments have been slow to adopt laws holding developers and those who deploy the technology responsible.
They note that realistic videos and images are increasingly used to victimize adolescents and adults. Here's a problem they've identified: when a high school student creates a fake porn video or image of classmates, the student faces little to no punishment. Schools say there are no district policies, nor any state or federal laws, to regulate this behavior.
At the same time, scams targeting seniors and other consumers have become increasingly prevalent, thanks in part to AI technology.
Last year, consumers lost $10 billion to scams and fraud, a significant increase from $3.5 billion in 2020, according to the Federal Trade Commission.
Justin Brookman, director of technology policy at Consumer Reports, said advances in AI technology have made fraud easier and less costly: creating a credible fake image or video that once cost a scammer about $4 now costs about 12 cents.
Discriminatory practices will also continue to grow, Farid said, noting that big companies are still using faulty algorithms created in the past and merely updating them with advanced AI technology instead of writing entirely new code.
U.S. Rep. Brittany Pettersen said the areas most affected by what she and others have described as "bias" in technology are the housing and financial sectors.
McBorrough said concerns about “bad” technology are reasonable if companies continue to rely on technology that is already flawed and discriminates against particular demographics, which is why he applauded the action taken by the Colorado Legislature in 2024.
The Colorado Legislature passed Senate Bill 205, which supporters say will establish a framework to reduce the risk of "undesirable bias" in AI-based decision-making. The law is set to take effect in February 2026.
Opponents, meanwhile, argued that innovation should be "encouraged, not stifled," and that policy should strike a balance between protecting consumers and promoting technological progress. They added that the legislation imposes provisions that "may not be feasible or effective."
Meanwhile, the Attorney General's Office was tasked with implementing the law by creating audit policies and identifying high-risk AI practices, and a task force was created to address the bill's flaws during next year's legislative session.
McBorrough, who told Colorado Politics he works with major AI development companies, such as Google, said the industry is committed to protecting the public from "bad actors" who push AI technology in the wrong direction.
“The intention is good and some developers are careful in making decisions and carefully planning what we do,” he said.
On November 19, Farid warned Hickenlooper and other members of Congress that unless laws are passed that hit big businesses hard, the “bad” side of AI will only get worse.
Farid said the solution centers on money.
As long as big tech companies develop AI technology that enables scams, non-consensual images and other abuses, people will continue to misuse it.
To deter big tech companies developing AI, such as Google, Microsoft and others, from building such technologies, Farid said, they must face fines large enough to change their behavior.