AIDE-001 · Initial Access · Coverage B · ATT&CK Aligned

IDE Configuration File Poisoning

AIDE-001 | ATT&CK: T1195.002

Description

Adversary Behavior: An adversary introduces or modifies project-level configuration files — such as AI rule files, instruction files, or prompt templates stored within the repository — to inject standing instructions that shape LLM code generation behavior across the project.

AI/IDE Mechanism: LLM-integrated IDEs automatically consume project-level configuration files as trusted input during every code generation or completion event. The IDE treats these files as authoritative project configuration with no distinction between legitimate and adversary-supplied instructions.

Execution Path: The adversary commits poisoned configuration files to a shared repository, submits them via pull request, or embeds them in project templates. Once present in the workspace, the IDE automatically ingests these files during context assembly, and the injected instructions are followed by the LLM in every subsequent code generation or completion event.

Security Impact: Injected instructions persist across sessions and affect every developer who clones or pulls the repository, giving the adversary persistent influence over code generation output across the entire development team.
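The ingestion path described above can be sketched as a minimal, hypothetical context assembler. The file names and message format are assumptions for illustration, not any specific IDE's implementation; the point is that rule-file text is appended to the system context with no provenance check.

```python
# Minimal sketch of LLM IDE context assembly (hypothetical; real IDEs
# differ). Shows why repository rule files become standing instructions
# for every code generation or completion event.
from pathlib import Path

# Assumed rule-file locations; actual patterns vary by IDE.
RULE_FILES = [".cursorrules", ".github/copilot-instructions.md"]

def assemble_context(workspace: str, user_prompt: str) -> list[dict]:
    messages = [{"role": "system", "content": "You are a coding assistant."}]
    for rel in RULE_FILES:
        f = Path(workspace) / rel
        if f.is_file():
            # Rule-file text is injected verbatim as system-level context,
            # so adversary-committed instructions are treated as trusted.
            messages.append({"role": "system", "content": f.read_text()})
    messages.append({"role": "user", "content": user_prompt})
    return messages
```

Because the rule file outranks the user turn in this assembly, a single poisoned commit shapes every subsequent generation without further adversary interaction.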

Platforms

Windows, macOS, Linux

Detection

Monitor for creation or modification of known AI configuration file patterns in repositories. Implement file integrity monitoring for AI configuration files. Diff analysis should flag non-obvious instruction content, particularly instructions that reference security-relevant coding patterns.
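As a hedged sketch of such scanning, the snippet below flags AI configuration files containing invisible Unicode format characters (category Cf, e.g. U+200B, U+2060, U+FEFF), the hiding technique described in the Pillar Security "Rules File Backdoor" research cited below. The glob patterns are assumptions, not a complete inventory of AI configuration files.

```python
# Scan a repository for AI configuration files containing invisible
# Unicode format characters (category Cf), a common way to hide
# injected instructions from human diff review.
import unicodedata
from pathlib import Path

# Assumed file patterns; extend per the IDEs in use.
AI_CONFIG_GLOBS = [
    "**/.cursorrules",
    "**/.github/copilot-instructions.md",
    "**/.cursor/rules/*.mdc",
]

def scan(repo: str) -> list[tuple[str, str]]:
    """Return (relative path, offending code points) for each flagged file."""
    findings = []
    root = Path(repo)
    for pattern in AI_CONFIG_GLOBS:
        for f in root.glob(pattern):
            text = f.read_text(encoding="utf-8", errors="replace")
            hidden = sorted({c for c in text
                             if unicodedata.category(c) == "Cf"})
            if hidden:
                findings.append((
                    f.relative_to(root).as_posix(),
                    " ".join(f"U+{ord(c):04X}" for c in hidden),
                ))
    return findings
```

Running this in CI or a pre-receive hook surfaces hidden-character payloads that a visual diff would miss; semantic review of visible instruction content still requires a human or a second model.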

Detecting Data Components (3)

Configuration File Creation
Events capturing creation of new AI-relevant configuration files, particularly when created by LLM agents.
Prompt Content
Full text of prompts sent to the LLM including system prompts, user instructions, and assembled context.
Configuration File Modification
Events capturing modifications to AI-relevant configuration files within the IDE and project workspace.

Mitigations (1)

AI Configuration File Integrity Monitoring
Implement file integrity monitoring and diff analysis for AI configuration files (.cursorrules, .github/copilot-instructions.md, MCP configuration files). Flag non-obvious instruction content and enforce review requirements for AI configuration changes in repositories.
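One way to enforce this is a CI integrity gate that compares watched files against a reviewed hash baseline. The baseline file name and watched list below are assumptions for illustration; any drift from the reviewed SHA-256 baseline fails the build until the change is re-reviewed and the baseline updated.

```python
# Sketch of a repository integrity gate for AI configuration files:
# compares each watched file's SHA-256 digest against a reviewed
# baseline and reports drift.
import hashlib
import json
from pathlib import Path

BASELINE = Path(".ai-config-baseline.json")  # hypothetical baseline file
WATCHED = [".cursorrules", ".github/copilot-instructions.md"]

def check() -> int:
    """Return 0 if all watched files match their baseline hash, else 1."""
    baseline = json.loads(BASELINE.read_text()) if BASELINE.exists() else {}
    drift = []
    for rel in WATCHED:
        path = Path(rel)
        if not path.exists():
            continue
        digest = hashlib.sha256(path.read_bytes()).hexdigest()
        if baseline.get(rel) != digest:
            drift.append(rel)
    if drift:
        print("AI config drift, review required:", ", ".join(drift))
    return 1 if drift else 0
```

A CI job would fail the pipeline on a nonzero return from check(); because updating the baseline itself requires a pull request, every AI configuration change passes through the review gate the mitigation calls for.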

Data Sources

File: File Modification
File: File Creation
Application Log: Application Log Content

References

MITRE ATT&CK: T1195.002
Maps to Supply Chain Compromise: Compromise Software Supply Chain. Coverage Level B — needs updated procedure examples for IDE configuration files.
https://attack.mitre.org/techniques/T1195/002
Pillar Security Rules File Backdoor
New Vulnerability in GitHub Copilot and Cursor: How Hackers Can Weaponize Code Agents Through Compromised Rule Files
https://www.pillar.security/blog/new-vulnerability-in-github-copilot-and-cursor-how-hackers-can-weaponize-code-agents
Copilot Backdoor (Trail of Bits, Aug 2025)
Prompt injection via GitHub issue caused Copilot to insert backdoors into software projects that pass human code review.
https://blog.trailofbits.com/2025/08/
IDEsaster CVEs
CVE-2025-53773, CVE-2025-54130, CVE-2025-55012 — settings file modification to malicious interpreter paths; 100% of AI IDEs failed testing (Marzouk, 2025-2026)
https://byteiota.com/idesaster-30-cves-hit-cursor-github-copilot-all-ai-ides/
CVE-2026-26268 Cursor Sandbox Escape
Cursor AI editor sandbox escape via .git config write and Git hook injection, enabling RCE through prompt injection (Feb 2026)
https://blog.gopenai.com/the-code-editor-you-trust-just-became-a-trojan-horse-6aad59f5f0c6

STIX Metadata

type attack-pattern
id attack-pattern--295b93fb-27fe-426e-9f1b-ef1c6989dd51
spec_version 2.1
created 2026-02-23T00:00:00.000Z
modified 2026-02-23T07:28:44.000Z
created_by_ref identity--f5b5ec62-ffbd-4afd-9ee5-7c648406e189
x_mitre_is_subtechnique False
x_mitre_version 0.1
x_mitre_status mapped