The French National Cybersecurity Agency (ANSSI) released a comprehensive 12-page threat synthesis on February 4, 2026, detailing how generative artificial intelligence is being weaponized by cybercriminals and state-sponsored hackers while simultaneously becoming a target for sophisticated attacks. The report, titled Generative AI and Cyberattacks, provides an assessment of threats observed throughout 2025 and calls for continued vigilance as the landscape evolves rapidly.
According to the ANSSI report, no cyberattack relying on artificial intelligence has been reported against French entities to date, and no AI system has demonstrated the capability to autonomously execute every stage of a cyberattack. However, the agency warns that generative AI significantly raises the sophistication, volume, diversity and effectiveness of attacks, particularly against poorly secured environments. State-sponsored threat actors from Iran, China, North Korea and Russia have been identified using commercial AI services such as Google Gemini for reconnaissance, phishing content generation and malware development.
The report identifies several alarming developments in AI-powered offensive tools. Google detected Promptflux, polymorphic malware that uses the Gemini API to rewrite its own source code every hour to evade detection. Researchers at New York University developed PromptLock, a proof-of-concept ransomware that generates its attack scripts dynamically at runtime. Criminal marketplaces now offer jailbreak-as-a-service platforms such as EscapeGPT and LoopGPT, while unrestricted AI models such as WormGPT 4, trained directly on malicious code and phishing templates, sell for approximately one hundred dollars per month.
ANSSI also highlights that AI systems themselves are becoming high-value targets. Research from the UK AI Security Institute and the Alan Turing Institute demonstrated that as few as 250 poisoned documents are enough to backdoor a generative AI model, regardless of the size of its training corpus. Attackers exploit slopsquatting by spotting fictional software package names hallucinated by AI systems and publishing real malicious packages under those names to poison software supply chains. Between 2022 and 2023, more than 100,000 ChatGPT user accounts were compromised through infostealers such as Rhadamanthys and sold on criminal forums.
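One practical countermeasure against slopsquatting, not drawn from the ANSSI report itself, is to verify that every dependency an AI assistant proposes actually resolves in the official package registry before it is installed. The sketch below illustrates the idea for PyPI; the suggested package list is hypothetical.

```python
"""Minimal sketch: check that packages suggested by an LLM actually exist on PyPI
before installing them, as one mitigation against slopsquatting.
The package names below are purely illustrative."""
import urllib.error
import urllib.request


def exists_on_pypi(package: str) -> bool:
    """Return True if the package name resolves on the PyPI JSON API."""
    url = f"https://pypi.org/pypi/{package}/json"
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return resp.status == 200
    except urllib.error.HTTPError:
        return False  # 404: the name is unregistered and could be squatted later


# Hypothetical list of dependencies proposed by an AI coding assistant
suggested = ["requests", "fastjsonutils-pro"]  # the second name is made up

for name in suggested:
    status = "exists" if exists_on_pypi(name) else "NOT FOUND - review before installing"
    print(f"{name}: {status}")
```

A check like this only confirms that a name is registered; in practice it would be combined with pinned versions and hash verification so that an attacker who later registers a hallucinated name still cannot slip into the build.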
The agency recommends organizations consult the ANSSI guide on Security Recommendations for Generative AI Systems when implementing LLM solutions. The report emphasizes that while advanced actors use AI for performance gains and scaling operations, less experienced attackers leverage it as a learning tool. In all cases, generative AI enables malicious actors to operate faster and at greater scale, necessitating regular threat reassessment and robust security measures.
--- KEY POINTS FROM THE ANSSI REPORT (CERTFR-2026-CTI-001) ---
AI AS AN ATTACK TOOL: Forty-two state-sponsored groups (10 Iranian, 20 Chinese, 9 North Korean, 3 Russian) have used Google Gemini. AI is used for social engineering, fake profiles, phishing content, malware development (Promptflux, PromptLock), and analyzing exfiltrated data. Criminal AI services (WormGPT, FraudGPT, EvilGPT) cost around $100/month. No AI system can yet conduct a complete cyberattack autonomously.
AI AS A TARGET: As few as 250 poisoned documents can backdoor an AI model. Slopsquatting exploits AI hallucinations to mount supply chain attacks. MCP servers connecting LLMs to external tools expand the attack surface. Over 100,000 ChatGPT accounts were stolen via infostealers. Samsung employees accidentally leaked semiconductor secrets through ChatGPT in 2023.
RECOMMENDATIONS: Implement strict data compartmentalization. Regularly reassess AI-related threats. Follow ANSSI security guidelines for generative AI deployment. Monitor for compromised AI accounts and poisoned training data.
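As a concrete illustration of the data compartmentalization recommendation (and of the kind of accidental leak cited in the Samsung example), a pre-submission filter can redact obvious secrets before a prompt leaves the internal perimeter. The sketch below is a minimal, assumption-laden example: the regular expressions, the sample prompt and the function name are illustrative and are not part of the ANSSI guidance.

```python
"""Minimal sketch of a prompt filter that redacts likely secrets before text is
sent to an external generative AI service. Patterns and names are illustrative
assumptions, not taken from the ANSSI recommendations."""
import re

# Naive patterns for material that should never leave the internal perimeter
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),                         # AWS access key IDs
    re.compile(r"-----BEGIN [A-Z ]*PRIVATE KEY-----"),       # PEM private key headers
    re.compile(r"(?i)(api[_-]?key|password)\s*[:=]\s*\S+"),  # key/password assignments
]


def redact(prompt: str) -> str:
    """Replace matches of known secret patterns with a fixed placeholder."""
    for pattern in SECRET_PATTERNS:
        prompt = pattern.sub("[REDACTED]", prompt)
    return prompt


if __name__ == "__main__":
    raw = "Debug this config: api_key=sk-test-1234 and host=internal.example"
    print(redact(raw))  # the credential is masked before any external call
```

A real deployment would pair such filtering with allow-listed AI services, access controls and logging; even so, simple pattern matching of this kind catches the most common credential formats before they ever reach an external model.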