AIDE-006 | Collection Coverage B | ATT&CK-Aligned | Credential Access

LLM-Mediated Credential Harvesting

AIDE-006 | ATT&CK: T1552.001

Description

Adversary Behavior: An adversary uses prompt injection — via configuration files, context poisoning, or tool return values — to instruct the LLM assistant to locate and extract sensitive credentials stored in the project workspace.

AI/IDE Mechanism: The LLM assistant has read access to project files and potentially environment variables as part of its standard operation. This access scope enables the model to search for patterns matching API keys, database connection strings, authentication tokens, private keys, and other secrets when directed to do so by injected instructions.

Execution Path: The adversary injects credential search instructions through any available injection vector. The LLM is directed to scan workspace files and environment variables for credential patterns. Extracted credentials are then exfiltrated through the LLM's available output channels: included in generated code, embedded in tool call arguments (e.g., web requests, file writes to shared locations), or returned as part of the LLM's visible response.

Security Impact: Credential compromise provides the adversary with direct access to external services, databases, APIs, and infrastructure resources associated with the harvested credentials, enabling further lateral movement and data access beyond the development environment.
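The credential patterns described above can be sketched as a small regex ruleset; this is the same pattern-matching a defender's secret scanner would apply, shown here as an illustrative (non-exhaustive) example. The pattern names and regexes are assumptions for illustration, not a production ruleset.

```python
import re

# Illustrative credential patterns: AWS access key IDs, PEM private key
# headers, database connection strings with embedded passwords, and
# generic quoted API keys/tokens. Not exhaustive.
CREDENTIAL_PATTERNS = {
    "aws_access_key_id": re.compile(r"AKIA[0-9A-Z]{16}"),
    "private_key_header": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    "db_connection_string": re.compile(r"postgres(?:ql)?://[^\s'\"]+:[^\s'\"]+@[^\s'\"]+"),
    "generic_api_key": re.compile(
        r"(?i)(?:api[_-]?key|token)\s*[=:]\s*['\"][A-Za-z0-9_\-]{16,}['\"]"
    ),
}

def find_credentials(text: str) -> list[tuple[str, str]]:
    """Return (pattern_name, matched_text) pairs found in the given text."""
    hits = []
    for name, pattern in CREDENTIAL_PATTERNS.items():
        for match in pattern.finditer(text):
            hits.append((name, match.group(0)))
    return hits
```

The same ruleset serves both sides of this technique: an injected prompt directs the model to grep for these shapes, and a defensive scanner matches on them before output leaves the workspace.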

Platforms

Windows | macOS | Linux

Detection

Monitor LLM-initiated file access for known secret-file patterns (.env, credentials.json, *.pem). Implement secret scanning on LLM-generated output before acceptance. Inspect outbound network requests from agent processes for credential-pattern content.
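The first detection above, flagging LLM-initiated reads of known secret files, can be sketched as a path filter. The watch-list below extends the examples in the text with a couple of assumed entries (id_rsa, *.key); the hook that would call it is platform-specific and not shown.

```python
import fnmatch
import os

# Secret-file patterns from the detection guidance; id_rsa and *.key are
# assumed additions for illustration.
SECRET_FILE_PATTERNS = [".env", "credentials.json", "*.pem", "id_rsa", "*.key"]

def is_secret_file(path: str) -> bool:
    """Flag paths whose basename matches a known secret-file pattern."""
    name = os.path.basename(path)
    return any(fnmatch.fnmatch(name, pat) for pat in SECRET_FILE_PATTERNS)
```

A file-access monitor would call this on every path the agent process opens and alert (or deny) on a match.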

Detecting Data Components (1)

Prompt Content
Full text of prompts sent to the LLM including system prompts, user instructions, and assembled context.

Mitigations (2)

LLM Output Validation and Encoding Detection
Scan LLM-generated output for encoded data patterns (base64, URL encoding), embedded URLs, and content that diverges from the prompt intent. Implement output content policies that block exfiltration patterns in generated code, markdown rendering, and tool invocations.
Credential Isolation from AI Agents
Prevent AI agent processes from accessing the developer's credential stores, SSH key directories, cloud configuration files, and authentication tokens. Use credential proxies that provide task-scoped, time-limited access.
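The first mitigation, scanning LLM output for encoded data and embedded URLs before acceptance, can be sketched as a simple policy check. The heuristics here are deliberately naive (a 32+ character base64-decodable run, any http(s) URL) and would need tuning; function and flag names are assumptions.

```python
import base64
import re

URL_RE = re.compile(r"https?://[^\s)\"']+")
B64_RE = re.compile(r"[A-Za-z0-9+/]{32,}={0,2}")

def flags_for_output(text: str) -> list[str]:
    """Return policy flags for LLM-generated output before it is accepted."""
    flags = []
    # Naive encoded-data check: any long run that validly base64-decodes.
    # Will also fire on long hex/hash-like strings; tune for your codebase.
    for candidate in B64_RE.findall(text):
        try:
            base64.b64decode(candidate, validate=True)
            flags.append("base64_blob")
            break
        except Exception:
            continue
    if URL_RE.search(text):
        flags.append("embedded_url")
    return flags
```

An output-content policy would block or hold for review any generated code, markdown, or tool invocation whose flags list is non-empty.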

Data Sources

File: File Access
Network Traffic: Network Traffic Content
Command: Command Execution

References

mitre-attack
Maps to Unsecured Credentials: Credentials In Files. Coverage Level B — needs LLM-mediated procedure examples.
https://attack.mitre.org/techniques/T1552/001
AgentFlayer (Zenity Labs, Aug 2025)
Cursor IDE credential exfiltration via Jira ticket injection — demonstrated pipeline-based lateral movement to steal secrets.
https://www.zenity.io/blog/agentflayer
CamoLeak (Legit Security, Jun 2025)
GitHub Copilot PR data exfiltration via camo URLs. CVSS 9.6. Secrets extracted from pull request context.
https://www.legitsecurity.com/blog/camoleak
IDEsaster CVEs
CVE-2025-49150, CVE-2025-53097, CVE-2025-58335 — AI tricked into writing secrets to JSON with remote $schema URL for validation exfiltration (Marzouk, 2025-2026)
https://byteiota.com/idesaster-30-cves-hit-cursor-github-copilot-all-ai-ides/
Knostic - AI Assistants Leak Secrets
.env files and MCP config automatically ingested by Claude Code, Cursor, Windsurf; secrets exposed through context window (Knostic, 2025)
https://www.knostic.ai/blog/ai-coding-assistants-leaking-secrets
RoguePilot
GitHub Copilot Codespaces token exfiltration via passive prompt injection in Issues; enables repository takeover (Orca Security, 2025)
https://orca.security/resources/blog/roguepilot-github-copilot-vulnerability/

STIX Metadata

type attack-pattern
id attack-pattern--c87cb8a1-9f64-481a-bda0-d17d8c7bf58d
spec_version 2.1
created 2026-02-23T00:00:00.000Z
modified 2026-02-23T00:00:00.000Z
created_by_ref identity--f5b5ec62-ffbd-4afd-9ee5-7c648406e189
x_mitre_is_subtechnique False
x_mitre_version 0.1
x_mitre_status mapped
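The metadata table above corresponds to a STIX 2.1 attack-pattern object; a minimal sketch of its serialized form follows. Field values are copied from the table, and the external_references entry is assembled from the mitre-attack reference above; the exact set of custom properties emitted by the real tooling is an assumption.

```python
import json

# STIX 2.1 attack-pattern object built from the metadata table.
attack_pattern = {
    "type": "attack-pattern",
    "spec_version": "2.1",
    "id": "attack-pattern--c87cb8a1-9f64-481a-bda0-d17d8c7bf58d",
    "created": "2026-02-23T00:00:00.000Z",
    "modified": "2026-02-23T00:00:00.000Z",
    "created_by_ref": "identity--f5b5ec62-ffbd-4afd-9ee5-7c648406e189",
    "name": "LLM-Mediated Credential Harvesting",
    "external_references": [
        {
            "source_name": "mitre-attack",
            "external_id": "T1552.001",
            "url": "https://attack.mitre.org/techniques/T1552/001",
        }
    ],
    "x_mitre_is_subtechnique": False,
    "x_mitre_version": "0.1",
    "x_mitre_status": "mapped",
}
print(json.dumps(attack_pattern, indent=2))
```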