AIDE-019 Supply Chain Propagation Coverage D ATT&CK Aligned Lateral Movement

Self-Replicating Prompt Propagation

AIDE-019 | ATT&CK: T1021

Description

Adversary Behavior: An adversary crafts prompt injection payloads that instruct the LLM to embed copies of the malicious prompt into outgoing content generated by the agent — git commits, pull requests, code comments, documentation, emails, or shared files — creating a self-replicating worm.

AI/IDE Mechanism: LLM coding agents generate content that is shared through normal development collaboration channels — version control, code review, document collaboration. The agent's content generation capability, combined with its ability to write to files and commit to repositories, provides the propagation mechanism. Unlike credential-based lateral movement (AIDE-016), self-replicating propagation is passive — the payload spreads through normal content sharing channels without requiring explicit network access or credential use.

Execution Path: Each infected LLM-integrated IDE that processes the poisoned content propagates the payload to additional systems through agent-generated commits, pull requests, and documentation. The payload typically includes both replication instructions (ensuring propagation) and an action-on-objective component (data exfiltration, backdoor insertion, or further reconnaissance). The propagation is exponential: a single initial infection can compromise every LLM-integrated IDE that reads the poisoned content.

Security Impact: The adversary achieves 1:N infection ratios where a single prompt injection compromises multiple downstream systems. The worm propagates through trusted content channels that are not subject to malware scanning, and each infected node generates unique payload instances through the LLM's generation process, evading signature-based detection.

Platforms

Windows macOS Linux

Detection

Compare LLM-generated output content against the input prompt and context window content to detect semantic similarity indicating self-replication. Monitor for generated content (commits, PRs, comments, docs) containing instruction-like patterns or prompt injection signatures. Implement output scanning that flags content resembling the original injection payload. Track the provenance of content in agent context — if generated content from one session appears as input in another, investigate for worm-like propagation. Monitor git commit content for embedded prompt injection patterns.

Detecting Data Components (5)

File Context Inclusion
Events capturing which local files are included in the LLM context window for each inference request.
Repository Context Retrieval
Events capturing retrieval of context from remote repositories, package registries, or documentation sources.
Code Suggestion Generated
Events capturing each code suggestion produced by the LLM, including code content, context, and security scan results.
Prompt Content
Full text of prompts sent to the LLM including system prompts, user instructions, and assembled context.
Response Content
Full text of LLM responses including generated code, explanations, and tool call requests.

Mitigations (4)

Agent Execution Sandboxing
Run AI coding agents in isolated security contexts with least-privilege permissions separate from the developer's ambient session. Implement task-scoped permission grants that restrict agent capabilities to files and tools relevant to the current task.
Generated Code Security Scanning
Apply inline SAST/security scanning to AI-generated code before presentation to the developer. Track vulnerability detection rates over time to identify adversarial steering patterns. Block acceptance of code with known vulnerability patterns.
Context Window Content Filtering
Apply input sanitization and prompt injection detection to content entering the LLM context window. Scan for instruction-like patterns in code comments, documentation, and external content. Implement content trust levels differentiating project files from external sources.
LLM Output Validation and Encoding Detection
Scan LLM-generated output for encoded data patterns (base64, URL encoding), embedded URLs, and content that diverges from the prompt intent. Implement output content policies that block exfiltration patterns in generated code, markdown rendering, and tool invocations.

Data Sources

Application Log Application Log Content
File File Creation
File File Modification
Network Traffic Network Traffic Content

References

mitre-attack
Maps to Remote Services. Coverage Level D — existing technique covers adversary use of remote services for lateral movement but does not address self-replicating prompt propagation through content sharing channels. The mechanism is fundamentally different from credential-based remote access.
https://attack.mitre.org/techniques/T1021
AgentHopper (Rehberger, Dec 2025, 39C3)
Self-propagating AI virus through git repositories — demonstrated exponential spread through AI coding assistants. The prompt payload persists in git repo state and activates when other AI assistants read the poisoned files.
https://embracethered.com/blog/
Morris II Worm (Cohen et al., Mar 2024)
First demonstrated AI worm — self-replicating adversarial prompts propagated via email across GenAI ecosystems. Achieved 5 kill chain stages including RAG-dependent persistence and self-replicating lateral movement.
https://arxiv.org/abs/2403.02817
ZombieAgent (Babo, Jan 2026)
ChatGPT vulnerabilities enabling data theft continuation and spread — combined retrieval-independent persistence with self-replication to propagate to additional users via shared content.
https://www.radware.com/blog/
Prompt Infection (Lee & Tiwari, Oct 2024)
LLM-to-LLM prompt injection within multi-agent systems — demonstrated cross-agent propagation where compromised agents infect other agents through shared context.
https://arxiv.org/abs/2410.07283

STIX Metadata

type attack-pattern
id attack-pattern--29bc0eba-f03d-42e8-a063-f382e93af7ab
spec_version 2.1
created 2026-02-23T02:04:20.000Z
modified 2026-02-23T02:04:20.000Z
created_by_ref identity--f5b5ec62-ffbd-4afd-9ee5-7c648406e189
x_mitre_is_subtechnique False
x_mitre_version 0.1
x_mitre_status mapped
Ask about AIDE-TACT
Thinking...

No account? Have an account?