In brief
- Senator Warren sent a letter to Defense Secretary Pete Hegseth demanding answers over Grok's Pentagon access.
- Security agencies have warned about Grok's risks, but the Pentagon appears to be pushing ahead anyway.
- Grok's history includes lurid deepfake images of minors, antisemitic outputs, and leaked conversations.
Senator Elizabeth Warren wants to know how a chatbot that allegedly generated millions of deepfake images, including compromising images depicting minors, ended up with keys to the Pentagon's most classified systems.
On Sunday, Warren sent a four-page letter to Defense Secretary Pete Hegseth demanding answers about the Department of Defense's decision to give Elon Musk's xAI access to classified military networks, which she said was granted while multiple federal agencies were raising red flags.
"I write regarding my concerns about the Department of Defense's (DoD) reported decision to allow Elon Musk's xAI to access classified systems despite concerns raised by multiple federal agencies, including the National Security Agency (NSA) and the General Services Administration (GSA)," Warren wrote.
"I am concerned that Grok's apparent lack of adequate guardrails could pose serious risks to the safety of U.S. military personnel and to the cybersecurity of classified systems," she added, "especially if Grok is given sensitive military information and access to operational systems."
The National Security Agency, Warren's letter notes, "conducted a classified review" and "determined Grok had particular security concerns that other models did not." The General Services Administration raised similar alarms.
"Were Grok to leak government information, this could reveal sensitive military plans, U.S. intelligence efforts, and potentially put service members in danger," Warren wrote.
Neither concern appears to have slowed anything down.
"It is unclear what assurances or documentation xAI has provided to the Department of Defense about Grok's security safeguards, data-handling practices, or safety controls, and whether DoD has evaluated these assurances before reportedly allowing Grok access to classified systems," the letter reads.
The timing could not be harder to ignore. The same day Warren's letter went out, three Tennessee minors filed a federal class action lawsuit against xAI, alleging Grok generated child sexual abuse material based on their real photos. The complaint accuses xAI of deliberately releasing Grok without industry-standard safeguards, calling it "a business opportunity" to profit from the exploitation of real people, including children.
Last week, the Washington Post reported that a Department of Government Efficiency (DOGE) employee under Musk's oversight copied sensitive Social Security Administration data on hundreds of millions of Americans, and intended to use that data at their new tech startup.
Warren's letter also cites Grok's history of generating antisemitic content, giving users instructions on how to commit murders and terrorist attacks, and running wild with non-consensual deepfakes despite repeated promises of fixes. Hundreds of thousands of private Grok conversations were also found indexed on Google last August.
Government testing showed Grok is more susceptible than competing models to "data poisoning" attacks, where manipulated data corrupts the system's outputs, a serious vulnerability for a tool being considered for weapons development and battlefield intelligence. The Pentagon's own Chief of Responsible AI circulated internal memos about these risks and stepped down shortly thereafter.
The deal itself came together under unusual circumstances. xAI was reportedly a late addition to the Pentagon's AI contract pool, awarded a deal worth up to $200 million last July. The classified access agreement followed in February, just as the DoD was publicly feuding with Anthropic over safety guardrails.
When asked about it, a Pentagon spokesperson told the Wall Street Journal that the department was "excited to have xAI, one of America's national champion frontier AI companies onboard and looks forward to deploying Grok to its official AI platform GenAI.mil in the very near future."
That context matters. Anthropic had been the only AI company with classified-ready systems, with Claude deployed in real military operations. After Anthropic refused the Pentagon's demand to make Claude available for "all lawful purposes," specifically pushing back on autonomous weapons and mass domestic surveillance, the DoD labeled the company a supply chain risk. xAI and OpenAI were brought in as replacements.
There are no records of xAI questioning the reach of the "all lawful purposes" standard. OpenAI was more diplomatic about it, establishing some boundaries at the server level.
Warren is asking Hegseth to respond by March 30 with the full text of the xAI agreement, all internal communications about the deal, and answers on whether any testing or evaluation occurred before access was granted. One of her 10 questions asks directly whether safeguards exist to ensure Grok doesn't cause "erroneous targeting decisions" if deployed in critical operational systems.