Briefly
- UNICEF's research estimates 1.2 million children had images manipulated into sexual deepfakes last year across 11 surveyed countries.
- Regulators have stepped up action against AI platforms, with probes, bans, and criminal investigations tied to alleged illegal content generation.
- The agency urged tighter laws and "safety-by-design" rules for AI developers, including mandatory child-rights impact assessments.
UNICEF issued an urgent call Wednesday for governments to criminalize AI-generated child sexual abuse material, citing alarming evidence that at least 1.2 million children worldwide had their images manipulated into sexually explicit deepfakes in the past year.
The figures, published in Disrupting Harm Phase 2, a research project led by UNICEF's Office of Strategy and Evidence (Innocenti), ECPAT International, and INTERPOL, show that in some countries the figure represents one in 25 children, the equivalent of one child in a typical classroom, according to a Wednesday statement and accompanying issue brief.
The research, based on nationally representative household surveys of roughly 11,000 children across 11 countries, highlights how perpetrators can now create realistic sexual images of a child without their involvement or awareness.
In some study countries, up to two-thirds of children said they worry AI could be used to create fake sexual images or videos of them, though levels of concern vary widely between countries, according to the data.
"We need to be clear. Sexualised images of children generated or manipulated using AI tools are child sexual abuse material (CSAM)," UNICEF said. "Deepfake abuse is abuse, and there is nothing fake about the harm it causes."
The call gained urgency as French authorities raided X's Paris offices on Tuesday as part of a criminal investigation into alleged child pornography linked to the platform's AI chatbot Grok, with prosecutors summoning Elon Musk and several other executives for questioning.
A Center for Countering Digital Hate report released last month estimated Grok produced 23,338 sexualized images of children over an 11-day period between December 29 and January 9.
The issue brief released alongside the statement notes these developments mark "a profound escalation of the risks children face in the digital environment," where a child can have their right to protection violated "without ever sending a message or even knowing it has happened."
The UK's Internet Watch Foundation flagged nearly 14,000 suspected AI-generated images on a single dark-web forum in one month, about a third of them confirmed as criminal, while South Korean authorities reported a tenfold surge in AI- and deepfake-linked sexual offenses between 2022 and 2024, with most suspects identified as teenagers.
The organization urgently called on all governments to broaden definitions of child sexual abuse material to include AI-generated content and to criminalize its creation, procurement, possession, and distribution.
UNICEF also demanded that AI developers implement safety-by-design approaches and that digital companies prevent the circulation of such material.
The brief calls for states to require companies to conduct child rights due diligence, notably child rights impact assessments, and for every actor in the AI value chain to embed safety measures, including pre-release safety testing for open-source models.
"The harm from deepfake abuse is real and urgent," UNICEF warned. "Children cannot wait for the law to catch up."
The European Commission launched a formal investigation last month into whether X violated EU digital rules by failing to prevent Grok from generating illegal content, while the Philippines, Indonesia, and Malaysia have banned Grok, and regulators in the UK and Australia have also opened investigations.