AI Dangers: Cybercriminals use language models for crypto theft!
Cybercriminals are using AI-driven malware to steal crypto. Google warns of current threats and strengthens security measures.

A recent report from the Google Threat Intelligence Group highlights a worrying trend in cybercrime: attackers are increasingly using large language models (LLMs) to make malware more intelligent and adaptable. The document, details of which can be found on Crypto News, describes five different families of AI-powered malware aimed at high-value assets such as cryptocurrency holdings.
The newly identified malware families can rewrite themselves at runtime, using LLMs such as Gemini and Qwen2.5-Coder to generate, modify, or obfuscate malicious code. This represents a significant advance, as the malware can now adjust its behavior dynamically.
Techniques and attack patterns
A notable example of this development is the PROMPTFLUX malware family, which uses a component called "Thinking Robot". This component calls Gemini's API every hour to regenerate its VBScript code. Another family, PROMPTSTEAL, associated with the Russian APT28 group, queries the Qwen model via Hugging Face to generate the Windows commands it needs. This method of "just-in-time code creation" represents a significant shift from traditional malware that follows hard-coded logic.
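To illustrate why this pattern undermines signature-based defenses, here is a minimal and deliberately benign Python sketch (not taken from the report; the two script variants are invented stand-ins). It shows that two functionally equivalent scripts, regenerated with slightly different wording, produce entirely different file hashes, so a static hash blocklist never matches the next regeneration.

```python
import hashlib

# Two hypothetical, harmless script variants standing in for code that an
# LLM regenerates on each run. They behave identically, but the text differs.
variant_a = 'Dim msg\nmsg = "hello"\nWScript.Echo msg\n'
variant_b = 'Dim greeting\ngreeting = "hello"\nWScript.Echo greeting\n'

def sha256(text: str) -> str:
    """Return the SHA-256 hex digest, as a hash-based scanner would compute it."""
    return hashlib.sha256(text.encode("utf-8")).hexdigest()

print(sha256(variant_a))  # one digest
print(sha256(variant_b))  # a completely different digest for the same behavior
```

Because every regeneration yields a new digest, defenders have to rely on behavioral signals, such as periodic outbound calls to LLM APIs, rather than on file hashes alone.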
The report shows that these AI-driven attacks are already active and specifically target high-value assets such as crypto holdings. A particularly worrying aspect is the identity of the perpetrators: the North Korean group UNC1069, also known as Masan, uses AI technologies to carry out crypto theft, abusing them to query crypto wallet data, create phishing scripts, and develop targeted social engineering attacks.
Google is responding to the threat
Google has already taken action in response to this threat. The company has disabled accounts associated with the activities described above and implemented stricter security measures. These include improved API monitoring mechanisms and faster-acting filters to detect and mitigate potential attacks.
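Google has not published the internals of these filters, so purely as an illustration of the general idea, the following Python sketch shows what a simple server-side heuristic for flagging malware-generation prompts might look like. All patterns, names, and thresholds here are invented; a production system would rely on trained classifiers, account reputation, and rate limits rather than a fixed keyword list.

```python
import re

# Hypothetical indicators of prompts that request malicious code generation.
SUSPICIOUS_PATTERNS = [
    r"obfuscate .*vbscript",
    r"rewrite (this|the) (script|payload) to evade",
    r"exfiltrate .*wallet",
    r"keylogger|ransomware",
]

def score_prompt(prompt: str) -> int:
    """Count how many suspicious patterns the prompt matches."""
    text = prompt.lower()
    return sum(1 for pattern in SUSPICIOUS_PATTERNS if re.search(pattern, text))

def should_block(prompt: str, threshold: int = 1) -> bool:
    """Block or escalate prompts whose score reaches the threshold."""
    return score_prompt(prompt) >= threshold

if __name__ == "__main__":
    print(should_block("Summarize this news article for me"))           # False
    print(should_block("Rewrite the script to evade antivirus scans"))  # True
```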
The developments in AI-powered malware not only raise questions about cybersecurity but also underscore the urgency of recognizing these threats and taking appropriate action. The combination of advanced technology and criminal activity could lead to significant losses in the digital asset sector in the long term.