IDE Configuration File Poisoning
Description
Adversary Behavior: An adversary introduces or modifies project-level configuration files — such as AI rule files, instruction files, or prompt templates stored within the repository — to inject standing instructions that shape LLM code generation behavior across the project.
AI/IDE Mechanism: LLM-integrated IDEs automatically consume project-level configuration files as trusted input during every code generation or completion event. The IDE treats these files as authoritative project configuration with no distinction between legitimate and adversary-supplied instructions.
Execution Path: The adversary commits poisoned configuration files to a shared repository, submits them via pull request, or embeds them in project templates. Once present in the workspace, the IDE automatically ingests these files during context assembly, and the injected instructions are followed by the LLM in every subsequent code generation or completion event.
Security Impact: Injected instructions persist across sessions and affect all developers who clone or pull the repository, enabling persistent adversary influence over code generation output at scale across the entire development team.
Platforms
Detection
Monitor for creation or modification of known AI configuration file patterns in repositories. Implement file integrity monitoring for AI configuration files. Diff analysis should flag non-obvious instruction content, particularly instructions that reference security-relevant coding patterns.
Detecting Data Components (3)
Mitigations (1)
Data Sources
References
STIX Metadata
| type | attack-pattern |
| id | attack-pattern--295b93fb-27fe-426e-9f1b-ef1c6989dd51 |
| spec_version | 2.1 |
| created | 2026-02-23T00:00:00.000Z |
| modified | 2026-02-23T07:28:44.000Z |
| created_by_ref | identity--f5b5ec62-ffbd-4afd-9ee5-7c648406e189 |
| x_mitre_is_subtechnique | False |
| x_mitre_version | 0.1 |
| x_mitre_status | mapped |