Internet safety campaigners have urged the UK communications watchdog to limit the use of artificial intelligence in crucial risk assessments, after a report that Mark Zuckerberg’s Meta planned to automate checks.
Ofcom said it was “considering the concerns” raised by the campaigners’ letter, which followed a report last month that up to 90% of all risk assessments at the owner of Facebook, Instagram and WhatsApp would soon be carried out by AI.
Social media platforms are required under the UK’s Online Safety Act to assess how harm could take place on their services and how they plan to mitigate those potential harms, with a particular focus on protecting child users and preventing illegal content from appearing. The risk assessment process is viewed as a key aspect of the act.
In a letter to Ofcom’s chief executive, Melanie Dawes, organisations including the Molly Rose Foundation, the NSPCC and the Internet Watch Foundation described the prospect of AI-driven risk assessments as a “retrograde and highly alarming step”.
They said: “We urge you to publicly assert that risk assessments will not normally be considered as ‘suitable and sufficient’, the standard required by … the act, where these have been wholly or predominantly produced through automation.”
The letter also urged the watchdog to “challenge any assumption that platforms can choose to water down their risk assessment processes”.
An Ofcom spokesperson said: “We’ve been clear that services should tell us who completed, reviewed and approved their risk assessment. We are considering the concerns raised in this letter and will respond in due course.”
Meta said the letter deliberately misrepresented the company’s approach to safety, and that it was committed to high standards and complying with regulations.
“We are not using AI to make decisions about risk,” a Meta spokesperson said. “Rather, our experts built a tool that helps teams identify when legal and policy requirements apply to specific products. We use technology, overseen by humans, to improve our ability to manage harmful content, and our technological advancements have significantly improved safety outcomes.”
The Molly Rose Foundation organised the letter after the US broadcaster NPR reported last month that updates to Meta’s algorithms and new safety features would mostly be approved by an AI system and no longer reviewed by staff.
According to a former Meta executive who spoke anonymously to NPR, the change will allow the company to launch app updates and features on Facebook, Instagram and WhatsApp more quickly, but will create “higher risks” for users, because potential problems are less likely to be caught before a new product is released to the public.
NPR also reported that Meta was planning to automate reviews in sensitive areas, including risks to young people and monitoring the spread of misinformation.