The Threadlinqs MCP Server connects AI coding agents directly to live threat intelligence. 28 tools expose threat search, detection export, IOC enrichment, MITRE mapping, C2 tracking, and more through the Model Context Protocol standard.
The Model Context Protocol is an open standard that lets AI assistants connect to external data sources and tools. Instead of copy-pasting threat intelligence into a chat window, your AI agent calls the Threadlinqs MCP Server directly to search threats, pull detections, enrich IOCs, and map MITRE techniques in real time.
MCP works with Claude Code, Cursor, Windsurf, VS Code Copilot, and any MCP-compatible client. The server runs locally via stdio transport, keeping your API key on your machine.
One command installs the server. No build step, no Docker, no infrastructure.
npx intelthreadlinqs-mcp
Or add it to your MCP client configuration:
{
  "mcpServers": {
    "threadlinqs-intel": {
      "command": "npx",
      "args": ["-y", "intelthreadlinqs-mcp"],
      "env": {
        "THREADLINQS_API_KEY": "your-api-key-here"
      }
    }
  }
}
16 of 28 tools work on the free tier without an API key. Authenticated users unlock C2 intelligence, advanced correlations, threat simulations, and more.
Every tool is typed, documented, and returns structured JSON that AI agents can reason over directly.
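As an illustration of what "structured JSON an agent can reason over" buys you, here is a minimal TypeScript sketch. The field names below are hypothetical stand-ins, not the actual Threadlinqs response schema: the point is that typed results can be filtered and transformed in code rather than re-parsed from prose.

```typescript
// Hypothetical shape of a threat-search result. These field names
// are illustrative only; consult the Threadlinqs docs for the
// real schema.
interface ThreatResult {
  id: number;
  title: string;
  mitreTechniques: string[];            // e.g. ["T1195"]
  iocs: { type: "ip" | "domain" | "hash"; value: string }[];
}

// Because results are structured, an agent (or glue code around it)
// can extract exactly the IOCs it needs with ordinary list operations.
function extractDomains(results: ThreatResult[]): string[] {
  return results.flatMap((r) =>
    r.iocs.filter((i) => i.type === "domain").map((i) => i.value)
  );
}

const sample: ThreatResult[] = [
  {
    id: 42,
    title: "Supply chain compromise",
    mitreTechniques: ["T1195"],
    iocs: [
      { type: "domain", value: "evil.example.com" },
      { type: "hash", value: "abc123" },
    ],
  },
];

console.log(extractDomains(sample)); // [ 'evil.example.com' ]
```

This is the practical difference from pasting a report into a chat window: the agent never has to guess where an indicator starts and ends.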
An AI agent reviewing a pull request can query the MCP server to check whether any IOCs or techniques from recent threats appear in the codebase, then suggest detection rules directly.
"Search for threats related to supply chain attacks,
then get details for MITRE technique T1195 and show me
detection rules I can deploy to Sentinel."

# The agent calls:
# 1. search_threats({ query: "supply chain" })
# 2. get_mitre_technique({ technique_id: "T1195" })
# 3. export_detection({ id: 42, format: "kql" })
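In client code, that three-step chain can be sketched as a small async workflow over a generic tool-call helper. `callTool` here is a stand-in for whatever MCP client plumbing your agent framework provides, and the argument shapes mirror the prompt example above rather than a documented API:

```typescript
// A generic tool-call signature: name plus a bag of arguments,
// returning whatever structured JSON the server sends back.
type CallTool = (
  name: string,
  args: Record<string, unknown>
) => Promise<unknown>;

// The three-step workflow from the agent prompt, expressed as
// sequential awaited calls. Tool names match the example above;
// argument shapes are illustrative.
async function supplyChainWorkflow(callTool: CallTool): Promise<unknown[]> {
  const threats = await callTool("search_threats", { query: "supply chain" });
  const technique = await callTool("get_mitre_technique", {
    technique_id: "T1195",
  });
  const detection = await callTool("export_detection", {
    id: 42,
    format: "kql",
  });
  return [threats, technique, detection];
}

// Demo with a stub in place of a real MCP client connection.
const log: string[] = [];
supplyChainWorkflow(async (name) => {
  log.push(name);
  return { ok: true };
}).then(() => console.log(log));
```

Each call's JSON result feeds the next step, which is exactly the loop an MCP-capable agent runs on its own when given the prompt above.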
Tools are gated by subscription tier to match the data sensitivity and compute cost of each operation.
Give your AI agent access to live threat intelligence. Install the MCP server now.