One classroom, one victim: UNICEF says AI is being used to sexualise children faster than laws can stop it.
UNICEF is sounding the alarm over the rapidly growing use of AI to create sexualised images of children, warning that weak laws and patchy safeguards are leaving millions of children exposed.
New research across 11 countries found that at least 1.2 million children report having their images turned into sexually explicit deepfakes in the past year alone. In some countries that amounts to one in every 25 children, roughly one child in a typical classroom.
According to UNICEF, AI tools are increasingly used for “nudification”: digitally stripping or altering photos to create fake nude or sexual images. The agency stressed that deepfake abuse is real abuse, insisting that AI-generated sexual images of children are child sexual abuse material (CSAM), full stop.
Worryingly, up to two-thirds of children in some countries say they fear AI could be used to fake sexual images or videos of them. UNICEF says that even when no real child is identifiable, such content normalises abuse, fuels demand and complicates law enforcement efforts.
While crediting AI developers that are building safety guardrails, UNICEF warned that too many platforms still lack strong protections, especially where generative AI is embedded in social media and content spreads fast.
To tackle the threat, UNICEF is urging governments to criminalise AI-generated child sexual abuse content, expand legal definitions of CSAM, and push tech companies to prevent circulation rather than simply deleting content after the damage is done.
Bottom line: there’s nothing fake about the harm.