In brief
- X restricted Grok's image generation and editing features, limiting access to paid subscribers.
- The changes followed reports of non-consensual sexualized AI images, including some involving minors.
- Regulators in California, Europe, and Australia are investigating xAI and Grok over potential violations.
X said it is restricting image generation and editing features tied to Grok, limiting access to paid users after the chatbot was used to create non-consensual sexualized images of real people, including minors.
In an update posted by the X Safety account on Wednesday, the company added technical restrictions to limit how users can edit images of real people via Grok.
The move followed reports that the AI generated sexualized images in response to simple prompts, including requests to put people in bikinis. In many cases, users tagged Grok directly under photos posted on X, causing the AI to generate edited images that appeared publicly in the same threads.
“We have implemented technological measures to prevent the Grok account from allowing the editing of images of real people in revealing clothing such as bikinis,” the company said, referencing the viral trend of asking Grok to put people in bikinis.
The company also said image creation and image editing through the Grok account on X are now available only to paid subscribers, a change it said is intended to improve accountability and prevent misuse of Grok’s image tools in ways that violate the law or X’s policies. The company also instituted location-based restrictions.
“We now geoblock the ability of all users to generate images of real people in bikinis, underwear, and similar attire through the Grok account and in Grok in X in those jurisdictions where it is illegal.”
Despite the changes, however, Grok continues to allow users to remove or alter clothing from photos uploaded directly to the AI, according to Decrypt’s testing and user reports following the announcement.
In some cases, Grok acknowledged “lapses in safeguards” after producing images of girls aged 12 to 16 in minimal clothing, conduct prohibited under the company’s own policies. The continued availability of these capabilities has drawn scrutiny from advocacy groups.
“If reports that Grok created sexualized images, particularly of children, are true, Texas law may have been broken,” Adrian Shelley, Texas director of Public Citizen, said in a statement. “Texas authorities do not need to look far to investigate these allegations. X is headquartered in the Austin area, and the state has a clear responsibility to determine whether its laws have been broken and, if so, what penalties are warranted.”
Public Citizen previously called for the U.S. government to pull Grok from its list of approved AI models over concerns about racism exhibited by the chatbot.
Global backlash
Global policymakers have also increased scrutiny of Grok, leading to a number of open investigations.
The European Commission said X and xAI could face enforcement under the Digital Services Act if safeguards on Grok remained insufficient. At the same time, Australia’s eSafety Commissioner said complaints involving Grok and non-consensual AI-generated sexual images have doubled since late 2025. The regulator said AI image tools capable of producing realistic edits complicate enforcement and victim protection.
In the UK, regulators at Ofcom opened an investigation into X under the Online Safety Act stemming from Grok being used to generate illegal sexualized deepfake images, including some involving minors. Officials said Ofcom could ultimately seek court-backed measures that effectively block the service in the UK if X is found non-compliant and fails to take corrective action.
Other countries, including Malaysia, Indonesia, and South Korea, have also opened investigations into Grok in a bid to protect minors.
While states across America monitor the situation, California is the first to open an investigation into Grok. On Wednesday, California Attorney General Rob Bonta announced a probe into xAI and Grok over the creation and spread of non-consensual sexually explicit images of women and children.
“The avalanche of reports detailing the non-consensual, sexually explicit material that xAI has produced and posted online in recent weeks is shocking. This material, which depicts women and children in nude and sexually explicit situations, has been used to harass people across the internet,” Bonta said in a statement.
The investigation will examine whether xAI’s deployment of Grok violated state laws governing non-consensual intimate imagery and child sexual exploitation.
“I urge xAI to take immediate action to ensure this goes no further,” Bonta said. “We have zero tolerance for the AI-based creation and dissemination of nonconsensual intimate images or of child sexual abuse material.”
Despite the ongoing investigations, X said it takes a “zero tolerance” stance on child sexual exploitation, non-consensual nudity, and unwanted sexual content.
“We take action to remove high-priority violative content, including Child Sexual Abuse Material (CSAM) and non-consensual nudity, taking appropriate action against accounts that violate our X Rules,” the company wrote. “We also report accounts seeking Child Sexual Exploitation materials to law enforcement authorities as necessary.”