Based on research published by Microsoft Security (February 2026).


TL;DR

AI Recommendation Poisoning targets persistent memory features in AI assistants.

Attackers embed hidden instructions in AI-triggering links (e.g., “Summarize with AI” buttons).

These instructions may be stored as long-term memory.

Future AI recommendations become biased without user awareness.

This expands the AI attack surface beyond single-session prompt injection into cross-session behavioral manipulation.


From Prompt Injection to Memory Manipulation

We are already familiar with:

Prompt injection

Indirect prompt attacks

RAG poisoning

Tool abuse


Original Source