In short
- Google’s Threat Intelligence Group has released its latest report on AI risks.
- The report suggests state-sponsored hackers are using tools like Google’s own Gemini to speed up their cyberattacks.
- Hackers are taking an interest in agentic AI, which could put AI fully in charge of attacks.
Google’s Threat Intelligence Group (GTIG) is sounding the alarm once again on the risks of AI, publishing its latest report on how artificial intelligence is being used by dangerous state-sponsored hackers.
The team has identified an increase in model extraction attempts, a method of intellectual property theft in which someone queries an AI model repeatedly, attempting to learn its internal logic and replicate it in a new model.
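In its simplest form, model extraction amounts to distilling a copy of a black-box model from its own answers. Below is a toy Python sketch of that general idea; the “victim” model, the query counts, and every name in it are illustrative stand-ins, not details from Google’s report.

```python
# Toy sketch of model extraction: the attacker never sees the victim's
# internals, only its answers to queries. All models here are stand-ins.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

# Stand-in for a proprietary black-box model the attacker can only query.
X_secret = rng.normal(size=(500, 4))
y_secret = (X_secret[:, 0] + X_secret[:, 1] > 0).astype(int)
victim = LogisticRegression().fit(X_secret, y_secret)

# Step 1: repeatedly query the black box with attacker-chosen inputs.
X_queries = rng.normal(size=(2000, 4))
y_answers = victim.predict(X_queries)  # only the outputs are observed

# Step 2: train a "student" model to replicate the victim's behavior.
student = DecisionTreeClassifier(max_depth=5).fit(X_queries, y_answers)

# The copy now agrees with the victim on inputs neither has seen before.
X_test = rng.normal(size=(1000, 4))
agreement = (student.predict(X_test) == victim.predict(X_test)).mean()
print(f"student/victim agreement: {agreement:.1%}")
```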
Our new Google Threat Intelligence Group (GTIG) report breaks down how threat actors are using AI for everything from advanced reconnaissance to phishing to automated malware development.
More on that and how we’re countering the threats ↓ https://t.co/NWUvNeBkn2
— Google Public Policy (@googlepubpolicy) February 12, 2026
While this is worrying, it isn’t the main risk Google is voicing concern over. The report goes on to warn of government-backed threat actors using large language models (LLMs) for technical research, targeting, and the rapid generation of nuanced phishing lures.
The report highlights concerns over the Democratic People’s Republic of Korea, Iran, the People’s Republic of China, and Russia.
Gemini and phishing attacks
These actors are reportedly using AI tools, such as Google’s own Gemini, for reconnaissance and target profiling, conducting open-source intelligence gathering on a large scale, as well as creating hyper-personalized phishing scams.
“This activity underscores a shift toward AI-augmented phishing enablement, where the speed and accuracy of LLMs can bypass the manual labor traditionally required for victim profiling,” the report from Google states.
“Targets have long relied on signals such as poor grammar, awkward syntax, or a lack of cultural context to help identify phishing attempts. Increasingly, threat actors now leverage LLMs to generate hyper-personalized lures that can mirror the professional tone of a target organization.”
For example, if Gemini were given the biography of a target, it could generate an ideal persona and help produce a scenario that would effectively capture their attention. By using AI, these threat actors can also translate into and out of local languages far more effectively.
As AI’s ability to generate code has grown, this has opened doors for malicious use too, with these actors troubleshooting and generating malicious tooling using AI’s vibe coding functionality.
The report goes on to warn about a growing interest in experimenting with agentic AI, a form of artificial intelligence that can act with a degree of autonomy, supporting tasks like malware development and automation.
Google notes its efforts to combat this problem on several fronts. Along with publishing Threat Intelligence reports several times a year, the firm has a team constantly hunting for threats. Google is also implementing measures to harden Gemini so that it can’t be used for malicious purposes.
Through Google DeepMind, the team attempts to identify these threats before they become viable. Effectively, Google looks to identify malicious capabilities and remove them before they can pose a risk.
While it’s clear from the report that the use of AI in the threat landscape has increased, Google notes that there are no breakthrough capabilities as of yet. Instead, there is simply an increase in the use of these tools and the risks they carry.