Tech giants agree to child safety principles around generative AI

Amazon, Google, Meta, Microsoft and OpenAI have signed up to the safety commitments, which are being led by child online safety organisations.
Top companies have pledged to develop, deploy and maintain generative AI models with child safety at the centre (Yui Mok/PA)
Martyn Landi, 23 April 2024

Some of the world’s biggest tech and AI firms have agreed to follow new online safety principles designed to combat the creation and spread of AI-generated child sexual abuse material.

Amazon, Google, Meta, Microsoft and ChatGPT creator OpenAI are among the companies to have signed up to the principles, called Safety By Design.

The commitments have been drawn up by child online safety group Thorn and fellow nonprofit All Tech is Human, and see the firms pledge to develop, deploy and maintain generative AI models with child safety at the centre, in an effort to prevent the misuse of the technology in child exploitation.

The principles see firms commit to developing, building and training AI models that proactively address child safety risks, for example by ensuring training data does not include child sexual abuse material, as well as to maintaining safety after release by staying alert and responding to child safety risks as they emerge.

Generative AI tools such as ChatGPT have become the key area of development within the technology sector over the last 18 months, with an array of AI models and content generation tools being developed and launched by the major firms.


The rapid rise has seen social media and other platforms flooded with AI-generated words, images and videos, with many online safety groups warning of the implications of more fake and misleading content being seen and spread online.

Earlier this year, children’s charity the NSPCC warned that young people were already contacting Childline about AI-generated child sexual abuse material.

Speaking about the new agreed principles, Dr Rebecca Portnoff, vice president of data science at Thorn, said: “We’re at a crossroads with generative AI, which holds both promise and risk in our work to defend children from sexual abuse.

“I’ve seen first-hand how machine learning and AI accelerates victim identification and child sexual abuse material detection. But these same technologies are already, today, being misused to harm children.

“That this diverse group of leading AI companies has committed to child safety principles should be a rallying cry for the rest of the tech community to prioritise child safety through Safety by Design.

“This is our opportunity to adopt standards that prevent and mitigate downstream misuse of these technologies to further sexual harm against children. The more companies that join these commitments, the better that we can ensure this powerful technology is rooted in safety while the window of opportunity is still open for action.”
