Tech Companies Promise to Try to Do Something About All the AI CSAM They’re Enabling

The biggest tech companies in the world pledge to do something about the harmful AI images they are actively enabling.
An image from Thorn's announcement showing how some of these images are made. Image: Thorn

Last week, Thorn, the organization founded by Demi Moore and Ashton Kutcher to combat human trafficking and the sexual exploitation of children, announced it had partnered with the responsible tech organization All Tech Is Human and the biggest tech and AI companies in the world to publicly commit to “safety by design” principles to “guard against the creation and spread of AI-generated child sexual abuse material (AIG-CSAM).”

Amazon, Anthropic, Google, Meta, Microsoft, Mistral AI, OpenAI, Hugging Face, and Stability AI are all part of the collaboration, which at this point amounts to a whitepaper that boils down to pledges to “responsibly” train and host AI models, proactively guard against CSAM, and do their best to minimize harm.

It is a grand gesture toward stopping one of the ugliest outcomes of the rapid development and deployment of generative AI tools, and a woefully inadequate response to the crisis this technology has created. Ultimately, the initiative allows tech companies to say they are doing something to address the problem while making transparent that they will pursue new revenue streams no matter the human cost.
