In brief
- Australia’s eSafety Commissioner flagged a spike in complaints about Elon Musk’s Grok chatbot creating non-consensual sexual images, with reports doubling since late 2025.
- Some complaints involve potential child sexual exploitation material, while others relate to adults subjected to image-based abuse.
- The concerns come as governments worldwide scrutinize Grok’s lax content moderation, with the EU declaring the chatbot’s “Spicy Mode” illegal.
Australia’s independent online safety regulator issued a warning Thursday about the growing use of Grok to generate sexualized images without consent, revealing her office has seen complaints about the AI chatbot double in recent months.
The country’s eSafety Commissioner, Julie Inman Grant, said some reports involve potential child sexual exploitation material, while others relate to adults subjected to image-based abuse.
“I am deeply concerned about the increasing use of generative AI to sexualise or exploit people, particularly where children are involved,” Grant posted on LinkedIn on Thursday.
The comments come amid mounting international backlash against Grok, a chatbot built by billionaire Elon Musk’s AI startup xAI, which can be prompted directly on X to alter users’ images.
Grant warned that AI’s ability to generate “hyper-realistic content” is making it easier for bad actors to create synthetic abuse and harder for regulators, law enforcement, and child-safety groups to respond.
Unlike competitors such as ChatGPT, Musk’s xAI has positioned Grok as an “edgy” alternative that generates content other AI models refuse to produce. Last August, it launched “Spicy Mode” specifically to create explicit content.
Grant noted that Australia’s enforceable industry codes require online services to implement safeguards against child sexual exploitation material, whether AI-generated or not.
Last year, eSafety took enforcement action against widely used “nudify” services, forcing their withdrawal from Australia, she added.
“We have entered an age where companies must ensure generative AI products have appropriate safeguards and guardrails built in across every stage of the product lifecycle,” Grant said, noting that eSafety will “investigate and take appropriate action” using its full range of regulatory tools.
Deepfakes on the rise
In September, Grant secured Australia’s first deepfake penalty when the Federal Court fined Gold Coast man Anthony Rotondo $212,000 (A$343,500) for posting deepfake pornography of prominent Australian women.
The eSafety Commissioner took Rotondo to court in 2023 after he defied removal notices, saying they “meant nothing to him” as he was not an Australian resident, then emailed the images to 50 addresses, including Grant’s office and media outlets, according to an ABC News report.
Australian lawmakers are pushing for stronger protections against non-consensual deepfakes beyond existing laws.
Independent Senator David Pocock introduced the Online Safety and Other Legislation Amendment (My Face, My Rights) Bill 2025 in November, which would allow individuals who share non-consensual deepfakes to be fined $102,000 (A$165,000) up front, with companies facing penalties of up to $510,000 (A$825,000) for non-compliance with removal notices.
“We are now living in a world where increasingly anyone can create a deepfake and use it however they want,” Pocock said in a statement, criticizing the government for being “asleep at the wheel” on AI protections.