AI Recommendation Poisoning: The Next Layer of Prompt Injection
AI Recommendation Poisoning targets persistent memory features in AI assistants. Attackers embed hidden instructions in AI-triggering links (e.g., “Summarize with AI” buttons). These instructions may be stored as long-term memory. Future AI recommendations become biased without user awareness. This expands the AI attack surface beyond single-session prompt injection into cross-session behavioral manipulation.
READ MORE →