AI Recommendation Poisoning targets persistent memory features in AI assistants. Attackers embed hidden instructions in AI-triggering links (e.g., “Summarize with AI” buttons). These instructions may be stored as long-term memory. Future AI recommendations become biased without user awareness. This expands the AI attack surface beyond single-session prompt injection into cross-session behavioral manipulation.
READ MORE → Vortex Node AI Security Labs are currently in beta. Interactive, gamified AI security challenges run through Discord, building a community of security professionals.
READ MORE → New realsese of Garak to test LlMs
READ MORE → Openai released its new modell
READ MORE → Researchers Uncover 30+ Flaws in AI Coding Tools Enabling Data Theft and RCE Attacks
READ MORE → New version if deepseek
READ MORE → Update
READ MORE → CVE-2025-64755
READ MORE → When The Hacker Is Mostly An Orchestrator: Anthropic vs GTG-1002 a comentary
READ MORE → Critical vulnerabilities in AI inference servers impact Meta, NVIDIA, Microsoft, vLLM, SGLang, and Modular projects.
READ MORE → Chinese state sponsored hackers using antrophic in an attack
READ MORE → Intressting paper
READ MORE → Code execution via prompt injection in GitHub Copilot Chat
READ MORE → Threat Actors Developing Novel AI Capabilities
READ MORE → Researchers has found that Threat actors trick AI to leak data trough a vulnerability.
READ MORE → generative AI to analyze the XLoader 8.0 malware
READ MORE → Cyber-AutoAgent is a proactive security assessment tool that autonomously conducts intelligent penetration testing with natural language reasoning, dynamic tool selection, and evidence collection using AWS Bedrock, Litellm or local Ollama models with the core Strands framework.
READ MORE → SesameOp: Novel backdoor uses OpenAI
READ MORE →