In brief
- Researchers discovered a prompt injection vulnerability in Google's Antigravity AI coding platform.
- The flaw could allow attackers to execute commands even with the platform's Safe Mode enabled.
- Google fixed the issue Feb. 28 after researchers disclosed it in January, Pillar Security said.
Google has patched a vulnerability in its Antigravity AI coding platform that researchers say could allow attackers to run commands on a developer's machine through a prompt injection attack.
According to a report by cybersecurity firm Pillar Security, the flaw involved Antigravity's find_by_name file search tool, which passed user input directly to an underlying command-line utility without validation. That allowed malicious input to turn a file search into a command execution task, enabling remote code execution.
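The vulnerable pattern the report describes is a classic one: interpolating a tool parameter into a shell string. The sketch below is illustrative only; the function names and the use of `find` are assumptions, not Antigravity's actual implementation.

```python
import subprocess


def find_by_name_vulnerable(pattern: str) -> str:
    # Interpolating the parameter into a shell string lets metacharacters
    # (';', '|', '$(...)') terminate the search and start a new command.
    result = subprocess.run(
        f"find . -name {pattern}", shell=True, capture_output=True, text=True
    )
    return result.stdout


def find_by_name_safe(pattern: str) -> str:
    # Passing an argument vector with no shell keeps the parameter inert:
    # it can only ever be a search pattern, never a second command.
    result = subprocess.run(
        ["find", ".", "-name", pattern], capture_output=True, text=True
    )
    return result.stdout
```

Given a hostile parameter such as `*; echo INJECTED`, the first wrapper executes the trailing `echo`, while the second merely searches for a file with that literal name.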
"Combined with Antigravity's ability to create files as a permitted action, this enables a full attack chain: stage a malicious script, then trigger it through a seemingly legitimate search, all without additional user interaction once the prompt injection lands," Pillar Security researchers wrote.
Launched last November, Antigravity is Google's AI-powered development environment designed to help programmers write, test, and manage code with the assistance of autonomous software agents. Pillar Security disclosed the issue to Google on January 7, and Google acknowledged the report the same day, marking the issue as fixed on February 28.
Google did not immediately respond to a request for comment from Decrypt.
Prompt injection attacks occur when hidden instructions embedded in content cause an AI system to perform unintended actions. Because AI tools often process external files or text as part of normal workflows, the system may interpret those instructions as legitimate commands, allowing an attacker to trigger actions on a user's machine without direct access or further interaction.
The threat of prompt injection attacks against large language models came into renewed focus last summer when ChatGPT developer OpenAI warned that its new ChatGPT agent could be compromised.
"When you sign ChatGPT agent into websites or enable connectors, it will be able to access sensitive data from those sources, such as emails, files, or account information," OpenAI wrote in a blog post.
To demonstrate the Antigravity issue, the researchers created a test script inside a project workspace and triggered it through the search tool. When executed, the script opened the computer's calculator application, showing that the search function could be turned into a command execution mechanism.
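The two-step chain can be sketched as follows. The researchers' proof of concept launched the calculator; this hedged version writes a marker file instead so the effect is observable, and the file names, the injected pattern, and the use of `find` are all assumptions rather than details from the report.

```python
import pathlib
import subprocess


def attack_chain(workdir: pathlib.Path) -> bool:
    # Step 1: file creation is a permitted agent action, so an injected
    # prompt can stage a script without any extra approval.
    (workdir / "setup.sh").write_text("echo compromised > marker.txt\n")

    # Step 2: a "search" whose parameter smuggles a second command
    # reaches the unvalidated shell call and triggers the staged script.
    injected_pattern = "'*.md'; sh setup.sh"
    subprocess.run(f"find . -name {injected_pattern}", shell=True, cwd=workdir)

    # The marker file proves the staged script actually ran.
    return (workdir / "marker.txt").exists()
```

Run against a scratch directory, the function returns `True`: the search found nothing, but the appended `sh setup.sh` executed anyway.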
"Critically, this vulnerability bypasses Antigravity's Safe Mode, the product's most restrictive security configuration," the report said.
The findings highlight a broader security challenge facing AI-powered development tools as they begin to execute tasks autonomously.
"The industry must move beyond sanitization-based controls toward execution isolation. Every native tool parameter that reaches a shell command is a potential injection point," Pillar Security said. "Auditing for this class of vulnerability is no longer optional, and it is a prerequisite for shipping agentic features safely."
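The distinction Pillar Security draws can be illustrated in a few lines. Quoting (here via `shlex.quote`) is a typical sanitization control; passing an argument vector with no shell is the structural alternative. The hostile string and the `find` invocation are illustrative assumptions.

```python
import shlex
import subprocess

# A parameter as it might arrive from a successful prompt injection.
hostile = "x'; echo pwned > escaped.txt; echo '"

# Sanitization-based control: quote the value before it enters the shell
# string. This works, but only if every call site remembers to do it.
sanitized_cmd = f"find . -name {shlex.quote(hostile)}"
subprocess.run(sanitized_cmd, shell=True, capture_output=True)

# Structural control: the parameter never reaches a shell at all, so
# there is nothing to quote and nothing to forget.
subprocess.run(["find", ".", "-name", hostile], capture_output=True)
```

Neither call above creates `escaped.txt`; remove the `shlex.quote` from the first command and the embedded `echo` runs, which is the gap sanitization must close at every single call site.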