By Khari Johnson, CalMatters

This story was originally published by CalMatters. Sign up for their newsletters.
California uses algorithms to predict whether incarcerated people will commit more crimes. It used predictive technology to deny unemployment benefits to 600,000 people. Yet state officials have concluded that no agency uses a single high-risk form of automated decision-making technology.
That’s according to a report the California Department of Technology provided to CalMatters after surveying nearly 200 state entities. Agencies are required by legislation signed in 2023 to report each year whether they use high-risk automated systems that can make decisions about people’s lives. “High risk” means any system that can assist or replace human decision-makers in determining people’s encounters with the criminal justice system, or whether they get access to housing, education, employment, credit, or health care.
The California Department of Technology doesn’t know which algorithms state agencies use today and has only reported what agencies told it, state Chief Technology Officer Jonathan Porat told CalMatters. When asked whether the employment or corrections department algorithms should qualify, Porat said it’s up to the agencies to interpret the law.
“We only know what they report to us, because even if they have the contract … we don’t know how or if they’re using it, so we rely on those departments to accurately report that information,” he said.
“We don’t know how or if they’re using it … We rely on those departments to accurately report that information.”
Jonathan Porat, chief technology officer, California Department of Technology
The agencies, which were required to submit responses by late August 2024, reported that no high-risk automated systems were used in the past year. Had they found high-risk systems, they would have had to report the type of personal data those systems use to make decisions about people, and the steps they take to reduce the likelihood that such use results in discrimination, bias, or an unfair outcome.
Some automated systems used by state agencies raise questions about the definition of risk. The California Department of Corrections and Rehabilitation, for example, assigns recidivism risk scores to the vast majority of incarcerated people to determine their needs when they enter and leave prison. One algorithm it uses, COMPAS, has a documented history of racial bias, but the corrections department told the Department of Technology that it uses no high-risk automation.
The California Employment Development Department likewise reported no use of high-risk automated systems. Between the Christmas and New Year’s holidays in 2020, the department suspended unemployment benefits for 1.1 million people after using fraud-detection tools from Thomson Reuters to score unemployment claims. Some 600,000 of those claims were later confirmed to be legitimate, according to a state analysis.
The employment department declined to say whether that algorithm is in use today, providing a written statement that its fraud detection processes are confidential “to ensure that we do not provide criminals with information that could aid criminal activity.”
“They’re talking out of both sides of their mouth here”
The report also appears out of sync with a trio of analyses conducted in the past year by California Legislature staff, which indicated the state would need to spend hundreds of millions of dollars or more each year to monitor government use of high-risk algorithms.
Last year, Assemblymember Rebecca Bauer-Kahan proposed a bill that would have required state agencies to conduct risk assessments of algorithms that can make a “consequential decision” about people’s lives – like the kinds of algorithms covered in the Department of Technology’s new report.
Three separate legislative analyses of her proposal by appropriations committee staff concluded it would be a costly undertaking, running hundreds of millions of dollars a year, with ongoing costs potentially reaching billions of dollars.
If there are no high-risk automated systems in California government, how could it cost hundreds of millions or even billions of dollars to assess them?
That’s what a source familiar with the analyses wondered. The person, who requested anonymity out of concern over potential professional consequences, said there is little daylight between the definition of a high-risk automated system in the Department of Technology’s report and a “consequential decision” in Bauer-Kahan’s bill. They think someone is lying.
“There’s no way that both of those things can be true,” they said. “They’re talking out of both sides of their mouth here.”
The authors of the legislative analyses did not respond to multiple requests for comment. And Porat, at the technology department, was also at a loss. “I can’t fully explain that,” he told CalMatters. “It’s possible that a department or agency, in partnership with a group or even within the state, is considering something for the future that didn’t meet the definition set out in last year’s reporting requirements.”
The legislation that requires the high-risk automation reports specifically mentions systems that produce scores. Given the pervasiveness of tools that assign risk scores, the outcome of the Department of Technology’s inventory is surprising, said Deirdre Mulligan, director of the Berkeley Center for Law & Technology, who helped craft AI policy for the Biden administration.
Mulligan said it’s critical for the government to put rules in place to guarantee that automation doesn’t deprive people of their rights. If analyses projecting potentially billions of dollars in assessment costs reflect state agencies’ future plans to use high-risk automation, she said, now is a timely moment to ensure such protections are in place.
Samantha Gordon, chief program officer at TechEquity, an advocacy group that has called for more transparency in how California uses AI, said state agencies must broaden their definition of high-risk systems if it doesn’t include algorithms like the one EDD used in 2020, which can deny people unemployment benefits and imperil their ability to keep a roof over their families’ heads.
“I think if you asked an everyday Californian whether losing their unemployment benefits at Christmastime, when they’re out of work, posed a real risk to their livelihood, I’d bet they would say yes,” she said.
High-risk generative AI in the state’s future
The high-risk automated decision-making report comes at a time when state agencies are deploying a series of potentially risky AI applications. In recent weeks, Governor Gavin Newsom has announced that state agencies will adopt AI tools to do things like talk with Californians about wildfires, manage traffic safety, speed up the rebuilding process after the Los Angeles wildfires, and inform state employees who help businesses file their taxes.
Lawmakers want to track these kinds of systems in part because of their potential to make mistakes. A 2023 state report on the risks and opportunities of government adoption of generative AI cautions that it can produce convincing but inaccurate results, give different answers to the same prompt, and suffer from model collapse, when its predictions drift away from accurate results. Generative AI also carries the risk of automation bias, when people grow overconfident in and reliant on automated decision-making, according to the report.
In late 2023, Newsom ordered the technology department to compile a different report, an inventory of high-risk uses of generative AI by executive branch state agencies. CalMatters requested a copy of that document, but the Department of Technology declined to share it; Chief Information Security Officer Vitaliy Panych cited an undue security risk.
Which AI deserves a high-risk label is a matter of ongoing debate and part of the legal regimes emerging in democracies around the world. Technology that earns the label is often subject to more testing before deployment and to continuous monitoring afterward. In the European Union, for example, the AI Act labels as high risk models used to operate critical infrastructure as well as those that decide access to education, employment, and public benefits. Similarly, the AI Bill of Rights compiled during the Biden administration defines as high risk any AI that can make decisions about people’s jobs, health care, and housing.
The California Legislature is considering dozens of bills to regulate AI in the coming months, provided Congress doesn’t impose a decade-long moratorium on state AI regulation. A report commissioned by the governor on how to balance innovation and guardrails for AI is due this summer.
The law that requires the high-risk automated systems report has notable exemptions, including the entire judicial branch and state licensing entities such as the State Bar of California, which sparked controversy last month after using AI to write questions for its high-stakes exam. The law also doesn’t require compliance from local governments, which often use AI in criminal justice or policing settings, or from school districts, where teachers use AI to grade papers. Covered California, the state’s health insurance marketplace, which The Markup revealed shared the personal information of Californians with LinkedIn, also uses generative AI, but that entity isn’t required to report to the Department of Technology.
This article was originally published by CalMatters and is republished under the Creative Commons Attribution-NonCommercial-NoDerivatives license.