AIDE-009 Supply Chain Propagation Coverage B ATT&CK Aligned Initial Access

AI-Assisted Supply Chain Propagation

AIDE-009 | ATT&CK: T1195.002

Description

Adversary Behavior: An adversary poisons upstream project artifacts — libraries, frameworks, templates, documentation, or training data — so that when downstream developers use LLM-integrated IDEs, the AI assistant propagates the adversary's payload into the downstream codebase.

AI/IDE Mechanism: LLM-integrated IDEs assemble context from upstream dependencies, documentation, and example code to inform code generation. The LLM acts as an amplification and obfuscation layer: the adversary does not need to inject malicious code directly into the upstream dependency. Instead, the adversary poisons artifacts that influence the LLM's code generation through context poisoning, documentation manipulation, or example code alteration.

Execution Path: The adversary introduces manipulated artifacts into upstream projects consumed as dependencies or references by downstream developers. When downstream developers use their LLM-integrated IDE, the AI assistant ingests the poisoned upstream artifacts during context assembly and independently generates vulnerable or backdoored code in the downstream project based on the adversary's influence.

Security Impact: The supply chain propagation is obfuscated through the LLM's generation process — the downstream code is not a direct copy of an upstream payload but rather independently generated code that reflects the adversary's intent, making traditional supply chain detection methods ineffective.

Platforms

Windows macOS Linux

Detection

Implement provenance tracking for LLM context — record which files, documentation, and examples influenced each code generation event. Correlate vulnerability patterns in generated code with specific upstream dependencies.

Detecting Data Components (5)

External Context Fetch
Events capturing context retrieval from external sources beyond repositories including web pages and MCP resources.
Code Suggestion Accepted/Rejected
Events capturing the developer's decision to accept or reject a code suggestion.
File Context Inclusion
Events capturing which local files are included in the LLM context window for each inference request.
Repository Context Retrieval
Events capturing retrieval of context from remote repositories, package registries, or documentation sources.
Code Suggestion Generated
Events capturing each code suggestion produced by the LLM, including code content, context, and security scan results.

Mitigations (2)

Context Window Content Filtering
Apply input sanitization and prompt injection detection to content entering the LLM context window. Scan for instruction-like patterns in code comments, documentation, and external content. Implement content trust levels differentiating project files from external sources.
Generated Code Security Scanning
Apply inline SAST/security scanning to AI-generated code before presentation to the developer. Track vulnerability detection rates over time to identify adversarial steering patterns. Block acceptance of code with known vulnerability patterns.

Data Sources

Application Log Application Log Content
File File Modification

References

mitre-attack
Maps to Supply Chain Compromise: Compromise Software Supply Chain. Coverage Level B — AI-propagation vector needs procedure example.
https://attack.mitre.org/techniques/T1195/002
XOXO Cross-Origin Context Poisoning
Stealthy Cross-Origin Context Poisoning Attacks against AI Coding Assistants
https://arxiv.org/abs/2503.14281
AgentHopper (Rehberger, Dec 2025, 39C3)
Self-propagating AI virus through git repositories — exponential spread via AI coding assistants. Git-based persistence and propagation.
https://embracethered.com/blog/
Copilot Backdoor (Trail of Bits, Aug 2025)
Supply chain attack via GitHub issue injection into Copilot code generation pipeline.
https://blog.trailofbits.com/2025/08/
MalInstructCoder
Framework converting LLM backdoors into traditional malware; 75-86% ASR with 1% training data poisoning on CodeLlama, DeepSeek-Coder, StarCoder2 (arXiv:2404.18567, 2024)
https://arxiv.org/abs/2404.18567
Automated Dependency Side-Loading
Malicious IDE/browser extensions intercept AI communication and silently inject unauthorized dependencies into generated code (InstaTunnel, Feb 2026)
https://medium.com/@instatunnel/automated-dependency-side-loading-the-invisible-supply-chain-attack-via-ai-extensions-fe615eb03f19

STIX Metadata

type attack-pattern
id attack-pattern--a7a76b8b-5e6e-4395-99c8-3d7039714e11
spec_version 2.1
created 2026-02-23T00:00:00.000Z
modified 2026-02-23T00:00:00.000Z
created_by_ref identity--f5b5ec62-ffbd-4afd-9ee5-7c648406e189
x_mitre_is_subtechnique False
x_mitre_version 0.1
x_mitre_status mapped
Ask about AIDE-TACT
Thinking...

No account? Have an account?