AIDE-018 Persistence Coverage C ATT&CK Aligned Persistence

IDE Session Memory Persistence Poisoning

AIDE-018 | ATT&CK: T1546

Description

Adversary Behavior: An adversary poisons the LLM-integrated IDE's internal conversation memory, session context, or learned preference store to achieve retrieval-independent persistence that survives IDE restarts, project changes, and conversation resets.

AI/IDE Mechanism: Unlike configuration file poisoning (AIDE-001) which targets user-editable project files, or MCP configuration tampering (AIDE-004) which targets tool definitions, this technique targets the IDE's internal state management — conversation history databases, 'memory' features that store facts across sessions, learned coding preferences, and skill/context files that the IDE automatically ingests. Once stored, poisoned memory content is automatically incorporated into every subsequent LLM inference regardless of which project is open, which files are in context, or what the developer's query is.

Execution Path: The adversary injects malicious instructions through prompt injection via any vector, causing the IDE to store the payload in its persistent memory. The poisoned content provides guaranteed reactivation (retrieval-independent persistence) as opposed to the probabilistic reactivation of RAG-dependent persistence mechanisms.

Security Impact: The attack survives IDE restarts, project changes, and conversation resets because the poisoned content resides in the IDE's persistent storage layer, not in the ephemeral context window. This provides cross-project persistence — the adversary's instructions influence every project the developer works on, regardless of the original infection vector.

Platforms

Windows macOS Linux

Detection

Monitor IDE memory/conversation persistence stores for unexpected content modifications. Track changes to IDE internal databases, skill files, and learned preference stores. Implement integrity checking on IDE memory content — hash stored memories and alert on modifications not initiated through the IDE's explicit memory management interface. Flag memory entries containing instruction-like patterns, URL references, or encoded data. Audit IDE session restoration to detect injection of stored context that was not present in the developer's original conversations.

Detecting Data Components (4)

Configuration File Creation
Events capturing creation of new AI-relevant configuration files, particularly when created by LLM agents.
File Context Inclusion
Events capturing which local files are included in the LLM context window for each inference request.
Configuration File Modification
Events capturing modifications to AI-relevant configuration files within the IDE and project workspace.
Prompt Content
Full text of prompts sent to the LLM including system prompts, user instructions, and assembled context.

Mitigations (3)

Agent Execution Sandboxing
Run AI coding agents in isolated security contexts with least-privilege permissions separate from the developer's ambient session. Implement task-scoped permission grants that restrict agent capabilities to files and tools relevant to the current task.
AI Configuration File Integrity Monitoring
Implement file integrity monitoring and diff analysis for AI configuration files (.cursorrules, .github/copilot-instructions, MCP configs). Flag non-obvious instruction content and enforce review requirements for AI configuration changes in repositories.
Context Window Content Filtering
Apply input sanitization and prompt injection detection to content entering the LLM context window. Scan for instruction-like patterns in code comments, documentation, and external content. Implement content trust levels differentiating project files from external sources.

Data Sources

Application Log Application Log Content
File File Modification
File File Creation

References

mitre-attack
Maps to Event Triggered Execution. Coverage Level C — existing technique covers event-triggered persistence but lacks guidance for LLM conversation memory features as a persistence vector. The IDE's memory system triggers payload injection on every new inference event.
https://attack.mitre.org/techniques/T1546
Windsurf SpAIware (Rehberger, Aug 2025)
Memory-persistent data exfiltration from Windsurf IDE — adversary content stored in IDE memory feature provides retrieval-independent persistence across all future coding sessions.
https://embracethered.com/blog/
ChatGPT SpAIware (Rehberger, Sep 2024)
Persistent spyware implant via ChatGPT memory poisoning — browsed webpage caused malicious instructions to be stored in long-term memory, enabling continuous data exfiltration across all subsequent conversations.
https://arxiv.org/abs/2412.06090
CurXecute — CVE-2025-54135 (Aim Security, Jul 2025)
Cursor IDE RCE via MCP demonstrated retrieval-independent persistence through configuration state that persists across sessions.
https://research.checkpoint.com/2025/cursor-vulnerability-mcpoison/
MemoryGraft
Persistent compromise via poisoned experience retrieval; semantic imitation heuristic causes durable behavioral drift across sessions (arXiv:2512.16962, 2025)
https://arxiv.org/abs/2512.16962
CorruptRAG
Single-injection RAG poisoning demonstrates practical memory persistence attacks; existing defenses fail (arXiv:2504.03957, 2025)
https://arxiv.org/abs/2504.03957
AgentLAB
Memory poisoning attack type validated across 28 agentic environments; single-turn defenses ineffective (arXiv:2602.16901, Feb 2026)
https://arxiv.org/abs/2602.16901
BackdoorAgent
Memory-stage backdoor attacks achieve highest persistence at 77.97% on GPT backbones (arXiv:2601.04566, Jan 2026)
https://arxiv.org/abs/2601.04566

STIX Metadata

type attack-pattern
id attack-pattern--680d7f46-cb5c-4cb2-83ba-3e2a2ce8f9a8
spec_version 2.1
created 2026-02-23T02:04:20.000Z
modified 2026-02-23T02:04:20.000Z
created_by_ref identity--f5b5ec62-ffbd-4afd-9ee5-7c648406e189
x_mitre_is_subtechnique False
x_mitre_version 0.1
x_mitre_status mapped
Ask about AIDE-TACT
Thinking...

No account? Have an account?