Cross-Context Adversarial Prompt Injection
Description
Adversary Behavior: Adversaries manipulate the context provided to AI systems integrated into development environments to influence code generation output by poisoning context sources with adversarial content — through modified source files, documentation, comments, or other artifacts ingested by the AI system.
AI/IDE Mechanism: AI-enabled development tools automatically assemble context from project files, documentation, and other artifacts to inform code generation. The context assembly pipeline does not distinguish between benign project content and adversary-crafted payloads, treating all ingested artifacts as trusted input to the generation model.
Execution Path: The adversary places crafted content in files that the AI system ingests during context assembly. The adversary does not directly execute commands; instead, the adversarial context redirects the AI system's probabilistic generation process through carefully crafted semantic manipulations. The generated code achieves execution when the developer accepts and deploys it. This technique has been demonstrated with a 75.72% success rate across multiple production LLM models using semantically equivalent code transformations that evade traditional program analysis.
Security Impact: Adversaries can redirect code generation to produce vulnerable or backdoored code at scale, with the developer unknowingly accepting and deploying adversary-influenced output. The indirect execution path makes attribution and detection significantly more difficult than direct code injection.
Platforms
Detection
Analyze context window contents for natural-language instruction patterns embedded in non-configuration files. Compare generated code against known vulnerability patterns. Monitor for divergence between developer intent and generated output. Implement context integrity validation that flags files containing prompt-like instruction patterns in code comments, docstrings, and documentation.
Detecting Data Components (6)
Mitigations (1)
Data Sources
References
STIX Metadata
| type | attack-pattern |
| id | attack-pattern--d5d4aecd-fc94-4496-ab59-b7ab3812c0cb |
| spec_version | 2.1 |
| created | 2026-02-23T00:00:00.000Z |
| modified | 2026-02-23T00:00:00.000Z |
| created_by_ref | identity--f5b5ec62-ffbd-4afd-9ee5-7c648406e189 |
| x_mitre_is_subtechnique | False |
| x_mitre_version | 0.1 |
| x_mitre_status | candidate |