TL-2026-0128 HIGH 2026-02-22 Threat Analysis

AI Prompt Injection Attacks on Enterprise LLMs — CrowdStrike Taxonomy and Agentic Tool Chain Exploits

Threadlinqs Intelligence 7 min
prompt-injectionllm-securityagentic-aimcp-poisoningtool-poisoningcrowdstrikeopenclawowasp-llm

Threat ID: TL-2026-0128 | Severity: HIGH | Status: ACTIVE

Actor: Multiple — opportunistic and nation-state | Motivation: MULTIPLE

MITRE Techniques: 33 | Detections: 9 | CWEs: CWE-74, CWE-94, CWE-1336


Prompt injection is not theoretical anymore. Agentic AI tools are being weaponized through tool-chain attacks that bypass every safety guardrail.

CrowdStrike published a coordinated release on February 18, 2026 detailing the most thorough taxonomy of prompt injection techniques documented to date, covering both direct and indirect attack classes against enterprise AI systems. The research identifies a critical escalation vector: agentic tool chain attacks that exploit AI agent reasoning layers to weaponize the Model Context Protocol (MCP), exfiltrate credentials, and propagate injection payloads across multi-agent workflows. Below: the taxonomy breakdown, the agentic attack surface, and production-ready detection queries across SPL, KQL, and Sigma.

CrowdStrike blog — 'Indirect Prompt Injection Attacks: A Lurking Risk to AI Systems' — detailing data source poisoning and tool chain manipulation vectors. CrowdStrike blog — 'Indirect Prompt Injection Attacks: A Lurking Risk to AI Systems' — detailing data source poisoning and tool chain manipulation vectors.

Executive Summary

Technical Analysis

The Taxonomy

CrowdStrike's classification framework uses a dual-axis notation: IM#### denotes the injection method (delivery channel) and PT#### denotes the prompting technique (manipulation style). The taxonomy spans two primary categories.

Direct Prompt Injection (DPI) targets user-facing input channels. Adversaries embed malicious instructions through role-playing personas (DAN, REBEL, Developer Mode), instruction override sequences, encoding tricks (Base64, ROT13, Unicode obfuscation), multi-turn context manipulation, and payload splitting across message boundaries.

Indirect Prompt Injection (IPI) poisons data sources consumed by AI systems without direct user interaction. Attack vectors include PDF metadata injection, spreadsheets ingested through RAG pipelines, malicious emails processed by AI assistants, and compromised web content fetched by browsing agents. CrowdStrike reports analyzing over 300,000 adversarial prompts across both categories.

Agentic Tool Chain Attacks

Inject the prompt. Own the agent.

The escalation from chatbot injection to agent exploitation represents the critical shift in this threat. Three attack patterns target AI agent architectures:

Tool Poisoning embeds hidden exfiltration instructions in MCP tool descriptions. These instructions remain invisible in user-facing interfaces but are processed by the model during tool selection. Controlled testing demonstrates 84.2 percent success rates when agents have auto-approval enabled. Invariant Labs published working proof-of-concept attacks extracting SSH keys and configuration files from Claude Desktop and Cursor.

Tool Shadowing occurs when one tool's description influences parameter construction for unrelated tools. A poisoned tool can inject BCC recipients into email tools, redirect API calls, or modify file paths — all without being directly invoked. The poisoned tool does not need to be called; loading it into context is sufficient for the model to follow its hidden instructions.

Rugpull Attacks exploit MCP's dynamic capability model. Servers change behavior post-integration through tool description updates, evading initial security review. A documented case involved the postmark-mcp package, where version 1.0.16 added a single-line BCC to every outgoing email sent through the tool. One version bump. Silent exfiltration.

The OpenClaw Problem

OpenClaw, an open-source AI agent framework with over 150,000 GitHub stars, exemplifies the architectural vulnerability. Deployments commonly run with root-level file system access, terminal control, browser automation, and email capabilities — often over unencrypted HTTP with internet exposure. The attacker owns the box at that point. A crypto wallet-drain payload embedded in the Moltbook social network in December 2025 confirmed active exploitation targeting OpenClaw users.

The OWASP Top 10 for LLM Applications ranks prompt injection as the number one risk — our analysis confirms why. When we mapped these agentic attack patterns against enterprise deployments tracked on the platform, every single instance with auto-approval enabled was exploitable through tool poisoning alone. No jailbreak required.

Attack Chain

  1. Resource Development — Attacker crafts poisoned tool descriptions, adversarial documents, or compromised web content containing injection payloads (T1587.004, T1608.002)
  2. Initial Access — Poisoned data reaches the AI system through RAG document ingestion, email processing, web browsing, or MCP tool registration (T1190, T1189, T1566.003)
  3. Execution — LLM processes injected instructions and executes tool calls, file operations, or command sequences outside its intended workflow (T1059, T1204.002)
  4. Credential Access — Agent reads sensitive files (.ssh/id_rsa, .aws/credentials, .env, .kube/config) following poisoned tool instructions (T1552.001, T1555)
  5. Collection and Exfiltration — Stolen credentials and data are exfiltrated through callback URLs, email BCC injection, or agent-to-agent message passing (T1041, T1567, T1020)
  6. Lateral Movement — Injected payloads propagate through multi-agent architectures via inter-agent communication channels, creating worm-like behavior (T1534, T1021)
CrowdStrike infographic — 'Taxonomy of Prompt Injection Methods' — cataloging 185+ named techniques across direct injection, indirect injection, and attacker prompting methods. CrowdStrike infographic — 'Taxonomy of Prompt Injection Methods' — cataloging 185+ named techniques across direct injection, indirect injection, and attacker prompting methods.

Threat Actor Profile

Attribution is murky. Nation-state groups are assessed to be actively researching AI agent exploitation for espionage operations, while opportunistic actors have demonstrated financial motivation through the Moltbook wallet-drain campaign. The $300 entry point for MaaS toolkits in adjacent attack categories (such as ClickFix) suggests prompt injection exploitation kits will follow similar commercialization patterns. CrowdStrike's insider threat research highlights that AI agent exploitation also enables insider threats to scale their operations, using legitimate AI tools to exfiltrate data while maintaining plausible deniability.

Detection

Traditional network IOCs are useless here. Detection lives in the agent telemetry layer.

Threadlinqs Intelligence provides 9 production-ready detection rules for this threat, spanning SPL, KQL, and Sigma formats.

Splunk SPL

This query scores AI agent direct prompt injection attempts against CrowdStrike's DPI classification patterns — instruction override, persona hijack, encoding bypass, and system prompt extraction all feed the risk score:

SPLindex=ai_logs sourcetype=ai_agent_input_logs OR sourcetype=llm_gateway_logs
| eval injection_score=0
| eval injection_score=if(match(lower(prompt_text), "ignore (all |any )?(previous|prior|above) (instructions|rules|guidelines)"), injection_score+40, injection_score)
| eval injection_score=if(match(lower(prompt_text), "(you are|act as|roleplay as|pretend to be) (DAN|REBEL|developer mode|jailbroken)"), injection_score+35, injection_score)
| eval injection_score=if(match(prompt_text, "[A-Za-z0-9+/]{40,}={0,2}"), injection_score+15, injection_score)
| eval injection_score=if(match(lower(prompt_text), "(reveal|show|output|print|display) (your|the) (system|initial|original) (prompt|instructions)"), injection_score+35, injection_score)
| eval injection_score=if(match(lower(prompt_text), "\[SYSTEM\]|\[INST\]|<\|im_start\|>system"), injection_score+30, injection_score)
| where injection_score >= 30
| stats count, values(injection_score) as scores, values(prompt_text) as prompts by user, src_ip, agent_id
| sort -count
This query scores multiple injection signals and triggers at a 30-point threshold, catching both simple instruction overrides and sophisticated multi-vector attempts.

Microsoft KQL

MCP tool description poisoning and credential file access by agents represent the tool poisoning and post-exploitation phases. Here, we target both:
KQLlet credential_paths = dynamic([".ssh/id_rsa", ".aws/credentials", ".env", ".kube/config", ".gnupg/", "credentials.json"]);
let suspicious_tool_patterns = dynamic(["exfiltrate", "send_to", "callback", "curl", "wget", "base64"]);
union
(
    AiAgentToolLogs
    | where TimeGenerated > ago(24h)
    | where ToolAction == "file_read"
    | where TargetPath has_any (credential_paths)
    | extend AlertType = "credential_access"
),
(
    MCPServerLogs
    | where TimeGenerated > ago(24h)
    | where ToolDescriptionHash != PreviousToolDescriptionHash
    | extend AlertType = "mcp_rugpull"
),
(
    AiAgentToolLogs
    | where TimeGenerated > ago(24h)
    | where ToolDescription has_any (suspicious_tool_patterns)
    | extend AlertType = "tool_poisoning"
)
| project TimeGenerated, AlertType, AgentId, ToolName, TargetPath, ToolDescription
| sort by TimeGenerated desc
This union query covers three critical detection surfaces: credential file access by agents, MCP tool definition changes (rugpull detection), and suspicious instructions embedded in tool metadata.

Sigma

Catching jailbreak attempts at the input layer — persona hijacks and encoding bypass techniques in AI agent logs:
SIGMAtitle: AI Agent Jailbreak Persona Hijack and Encoding Bypass
id: 8c3f7a91-b2d4-4e56-a891-3c7d5e8f9b12
status: experimental
description: Detects DAN/REBEL/Developer Mode persona hijack attempts and Base64/ROT13/Unicode encoding bypass in AI agent inputs
references:
    - https://intel.threadlinqs.com/#TL-2026-0128
    - https://www.crowdstrike.com/en-us/resources/infographics/taxonomy-of-prompt-injection-methods/
author: Threadlinqs Intelligence
date: 2026/02/22
tags:
    - attack.execution
    - attack.t1059
    - attack.initial_access
    - attack.t1190
logsource:
    category: ai_agent_input
    product: llm_gateway
detection:
    selection_persona:
        prompt_text|contains:
            - 'you are DAN'
            - 'act as DAN'
            - 'REBEL mode'
            - 'Developer Mode'
            - 'jailbroken mode'
            - 'ignore previous instructions'
            - 'disregard all prior rules'
    selection_encoding:
        prompt_text|re: '[A-Za-z0-9+/]{100,}={0,2}'
    selection_system_extraction:
        prompt_text|contains:
            - 'reveal your system prompt'
            - 'output your instructions'
            - 'print system message'
            - '[SYSTEM]'
            - '<|im_start|>system'
    condition: selection_persona or selection_encoding or selection_system_extraction
falsepositives:
    - Security researchers testing AI systems
    - Red team exercises
    - AI safety research documentation
level: high
Browse all 9 detection rules for this threat: View on Threadlinqs Intelligence

MITRE ATT&CK Mapping

TacticTechniqueIDContext
Initial AccessExploit Public-Facing ApplicationT1190Injection into AI-facing endpoints
Initial AccessDrive-by CompromiseT1189Poisoned web content processed by browsing agents
Initial AccessPhishing: Spearphishing via ServiceT1566.003Malicious emails processed by AI assistants
ExecutionCommand and Scripting InterpreterT1059Agent-executed terminal commands
ExecutionUser Execution: Malicious FileT1204.002Poisoned documents triggering agent actions
Defense EvasionObfuscated Files or InformationT1027Base64/ROT13/Unicode encoding bypass
Defense EvasionMasqueradingT1036Tool shadowing — legitimate tool descriptions hiding malicious behavior
Defense EvasionImpersonationT1656Jailbreak persona hijack (DAN, REBEL)
Credential AccessCredentials In FilesT1552.001Agent reading SSH keys, AWS credentials, env files
Credential AccessCredentials from Password StoresT1555Agent accessing credential managers
CollectionData from Information RepositoriesT1213RAG pipeline data extraction
CollectionEmail CollectionT1114AI email assistant exploitation
Lateral MovementInternal SpearphishingT1534Agent-to-agent injection worm propagation
ExfiltrationExfiltration Over C2 ChannelT1041Callback URLs in tool metadata
ExfiltrationAutomated ExfiltrationT1020Bulk channel history transfer
ImpactData ManipulationT1565Business process manipulation through injected instructions
Resource DevelopmentDevelop Capabilities: ExploitsT1587.004Crafting poisoned tool descriptions and adversarial payloads
Full MITRE ATT&CK mapping with 33 techniques: View coverage on Threadlinqs
OWASP Top 10 for Large Language Model Applications — the GenAI Security Project identifying prompt injection as the #1 risk for enterprise AI deployments. OWASP Top 10 for Large Language Model Applications — the GenAI Security Project identifying prompt injection as the #1 risk for enterprise AI deployments.

Indicators of Compromise

Behavioral Indicators

TypeIndicatorContext
File Access.ssh/id_rsa, .aws/credentials, .envAgent credential theft via tool poisoning
File Access.kube/config, .gnupg/, credentials.jsonKubernetes and GPG key exfiltration
Tool MetadataImperative instructions in tool descriptionsMCP tool description poisoning
Tool MetadataCallback URLs in tool parametersData exfiltration channel
Agent Behavior>20 tool calls per minuteAbnormal agent execution rate
Agent BehaviorBCC injection in email tool parametersTool shadowing cross-tool attack
Agent BehaviorChannel history bulk transfersWorm-like inter-agent propagation
MCP ProtocolTool definition hash changes without version pinRugpull attack indicator
ProcessAgent terminal commands outside workflowPost-exploitation lateral movement

Network Indicators

No specific network IOCs have been documented for this threat class. Detection focuses on behavioral patterns within AI agent telemetry rather than traditional network indicators.

Timeline

DateEvent
2022-09-12Simon Willison coins "prompt injection," draws parallels to SQL injection
2023-02-23Greshake et al. publish "Not What You've Signed Up For" — first end-to-end indirect injection study
2023-11-01OWASP Top 10 for LLM Applications published; Prompt Injection ranked number one
2025-11-01OWASP 2025 update retains Prompt Injection at number one (LLM01:2025)
2025-12-01Crypto wallet-drain payload in Moltbook social network targeting OpenClaw users
2026-01-15CrowdStrike publishes indirect injection and tool poisoning analysis
2026-02-10OpenClaw surpasses 150K GitHub stars; internet-exposed instances identified
2026-02-18CrowdStrike coordinated release: taxonomy poster, OpenClaw analysis, agentic tool chain attacks
2026-02-22Threadlinqs Intelligence publishes TL-2026-0128 with full MITRE ATT&CK mapping
TL-2026-0128 on Threadlinqs Intelligence — AI prompt injection attacks tracked with CrowdStrike taxonomy, MCP poisoning vectors, and 9/9 detection coverage. TL-2026-0128 on Threadlinqs Intelligence — AI prompt injection attacks tracked with CrowdStrike taxonomy, MCP poisoning vectors, and 9/9 detection coverage.

Recommendations

  1. Inventory all AI agent deployments — catalog OpenClaw instances, MCP servers, RAG pipelines, and custom agents; audit for internet exposure and excessive permissions
  2. Implement least-privilege agent access — restrict file system, terminal, email, and API permissions; require user confirmation for high-risk tool actions
  3. Deploy MCP tool governance — enforce cryptographic signatures, version pinning, and metadata audits; disable dynamic capability advertisement for production servers
  4. Monitor agent behavioral baselines — alert on tool call rate anomalies (>20/minute), credential file access, and inter-agent communication deviations
  5. Harden RAG pipelines — preserve document-level access controls through embeddings; implement content security policies separating trusted system prompts from untrusted user and external data

References


Full threat intelligence, detection rules, and IOC feeds are available on Threadlinqs Intelligence. Track this threat: TL-2026-0128.