What is Detection Engineering?
Detection engineering is the practice of designing, building, testing, and maintaining rules and logic that identify malicious activity in security monitoring systems, transforming threat intelligence into automated, production-grade detections.
Detection Engineering Explained
Every organization generates enormous volumes of security telemetry — endpoint logs, network flows, authentication events, cloud audit trails. Detection engineering is the discipline that turns that raw telemetry into alerts that matter. A detection engineer writes the rules that fire when an adversary dumps credentials from LSASS, when a phishing payload executes a PowerShell download cradle, or when a privileged account authenticates from an impossible geographic location.
Unlike traditional signature-based antivirus, modern detection engineering focuses on behavioral patterns. Instead of matching a known malware hash, a well-crafted detection rule identifies the technique itself: parent-child process relationships, anomalous API call sequences, suspicious registry modifications, or lateral movement patterns. This approach survives adversary retooling because the underlying behavior — the TTP — is far harder to change than a file hash or C2 domain.
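The behavioral approach can be illustrated with a toy check on parent-child process pairs. The pairs listed below are common examples (an Office document spawning PowerShell, a web server worker spawning a shell), not a complete policy, and the function name is illustrative:

```python
# Toy sketch of behavior-based detection: flag suspicious parent-child
# process pairs instead of matching known-bad file hashes. The pairs
# below are illustrative examples, not an exhaustive detection policy.
SUSPICIOUS_PARENT_CHILD = {
    ("winword.exe", "powershell.exe"),  # Office doc spawning a shell
    ("winword.exe", "cmd.exe"),
    ("mshta.exe", "powershell.exe"),    # HTA abuse
    ("w3wp.exe", "cmd.exe"),            # IIS worker spawning a shell (webshell)
}

def is_suspicious(parent: str, child: str) -> bool:
    """Return True if the parent-child pair matches a known-bad pattern."""
    return (parent.lower(), child.lower()) in SUSPICIOUS_PARENT_CHILD

print(is_suspicious("WINWORD.EXE", "powershell.exe"))  # True
print(is_suspicious("explorer.exe", "notepad.exe"))    # False
```

Because the check keys on the relationship rather than a hash, the adversary must change how the payload launches, not merely recompile it, to evade detection.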
Detection engineering sits at the intersection of threat intelligence, software engineering, and security operations. It borrows practices from software development — version control, code review, unit testing, CI/CD pipelines — and applies them to detection logic, a methodology increasingly called Detection as Code.
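A minimal sketch of what a Detection-as-Code CI gate might look like: before a rule is merged, assert it carries the metadata the pipeline needs. The field names follow the Sigma convention and the `validate_rule` helper and sample rule are illustrative, not a real framework:

```python
# Illustrative Detection-as-Code unit test: reject rules that lack the
# metadata a SOC deployment pipeline depends on. Field names follow
# the Sigma convention; the helper and sample rule are hypothetical.
REQUIRED_FIELDS = {"title", "status", "logsource", "detection", "level", "tags"}

def validate_rule(rule: dict) -> list[str]:
    """Return a list of problems; an empty list means the rule passes CI."""
    problems = [f"missing field: {f}" for f in sorted(REQUIRED_FIELDS - rule.keys())]
    if "detection" in rule and "condition" not in rule["detection"]:
        problems.append("detection block has no condition")
    return problems

rule = {
    "title": "LSASS Memory Access - Credential Dumping",
    "status": "production",
    "logsource": {"category": "process_access", "product": "windows"},
    "detection": {"selection": {"TargetImage|endswith": r"\lsass.exe"},
                  "condition": "selection"},
    "level": "high",
    "tags": ["attack.credential_access", "attack.t1003.001"],
}
print(validate_rule(rule))  # []
```

In practice such checks run in the repository's CI pipeline alongside syntax validation and replay of the rule against recorded attack telemetry.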
The Detection Lifecycle
Building effective detections follows a structured lifecycle: typically threat research, rule development, testing, deployment, and ongoing tuning. Each stage ensures that rules are grounded in real adversary behavior and validated before they reach production.
Detection Rule Formats
Security detections are written in query languages specific to the monitoring platform. Three formats dominate the industry:
- SPL (Splunk): a pipe-delimited language built around commands such as stats, eval, lookup, and tstats. It is the most widely deployed SIEM language in enterprise SOCs.
- KQL (Microsoft Sentinel): a pipe-forward language built around operators such as where, project, summarize, and join. Its deep integration with Microsoft 365 Defender and Entra ID makes it essential for organizations in the Microsoft ecosystem.
- Sigma: a vendor-neutral YAML format that compiles into SPL, KQL, and dozens of other backend formats, serving as a universal interchange layer.
Example: Detecting LSASS Credential Dumping
The same adversary technique — accessing LSASS process memory to extract credentials (MITRE T1003.001) — expressed in three detection formats:
SPL
index=windows sourcetype=sysmon EventCode=10
TargetImage="*\\lsass.exe"
GrantedAccess IN ("0x1010", "0x1410", "0x1438", "0x143a")
NOT SourceImage IN ("*\\csrss.exe", "*\\svchost.exe", "*\\MsMpEng.exe")
| stats count by SourceImage, TargetImage, GrantedAccess, Computer
KQL
DeviceEvents
| where ActionType == "OpenProcessApiCall"
| where FileName == "lsass.exe"
| where InitiatingProcessFileName !in ("csrss.exe", "svchost.exe", "MsMpEng.exe")
| where AdditionalFields has_any ("0x1010", "0x1410", "0x1438", "0x143a")
| project Timestamp, DeviceName, InitiatingProcessFileName, FileName
Sigma
title: LSASS Memory Access - Credential Dumping
status: production
logsource:
    category: process_access
    product: windows
detection:
    selection:
        TargetImage|endswith: '\lsass.exe'
        GrantedAccess:
            - '0x1010'
            - '0x1410'
            - '0x1438'
            - '0x143a'
    filter_legitimate:
        SourceImage|endswith:
            - '\csrss.exe'
            - '\svchost.exe'
            - '\MsMpEng.exe'
    condition: selection and not filter_legitimate
level: high
tags:
    - attack.credential_access
    - attack.t1003.001
The Detection Gap Problem
The MITRE ATT&CK framework catalogs over 600 techniques and sub-techniques across 14 tactics. Most organizations have detection coverage for fewer than 20% of them. This creates a vast detection gap — the space between what adversaries can do and what defenders can see.
Detection debt accumulates when high-priority techniques lack any detection coverage. It is calculated as: debt_score = technique_severity * exploitation_frequency * (1 - coverage_ratio). A technique used by 15 tracked threat actors with zero detection rules represents critical debt. Like technical debt in software, detection debt compounds — each undetected technique is an open door for adversaries, and the longer it goes unaddressed, the higher the risk of a breach that could have been caught.
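The debt formula above can be computed directly. The 0-10 severity scale and the use of the tracked-actor count as the exploitation-frequency term are illustrative assumptions:

```python
# Sketch of the detection-debt formula from the text:
#   debt_score = technique_severity * exploitation_frequency * (1 - coverage_ratio)
# Assumptions for illustration: severity on a 0-10 scale, and the number
# of tracked actors exploiting the technique as the frequency term.
def debt_score(severity: float, actors_exploiting: int,
               rules_deployed: int, rules_needed: int = 1) -> float:
    coverage_ratio = min(rules_deployed / rules_needed, 1.0)
    return severity * actors_exploiting * (1 - coverage_ratio)

# A technique used by 15 tracked actors with zero rules: critical debt.
print(debt_score(severity=9, actors_exploiting=15, rules_deployed=0))  # 135.0
# The same technique with full coverage: debt drops to zero.
print(debt_score(severity=9, actors_exploiting=15, rules_deployed=1))  # 0.0
```

Ranking uncovered techniques by this score gives a concrete backlog: the highest-debt techniques are the ones to write rules for first.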
Several factors drive detection debt:
- Log source gaps — Techniques targeting cloud APIs, SaaS platforms, or IoT devices require data sources that many organizations have not onboarded into their SIEM.
- Analyst bandwidth — Writing, testing, and tuning a single high-quality detection rule takes 4-8 hours. With hundreds of uncovered techniques, teams cannot keep pace manually.
- Platform fragmentation — Organizations running Splunk, Sentinel, and CrowdStrike need separate rules for each, tripling the engineering effort.
- Evasion evolution — Adversaries continuously develop new bypasses for existing detections, requiring ongoing tuning and variant development.
- False positive fatigue — Overly broad rules generate noise that drowns out real threats, leading analysts to ignore or disable detections entirely.
How Threadlinqs Solves It
Threadlinqs Intelligence eliminates the blank-page problem for detection engineers by shipping pre-built, threat-mapped detection rules in every format your SOC needs.
- Three Formats per Threat — Every threat in the platform ships with SPL, KQL, and Sigma rules. Copy and deploy directly into Splunk, Microsoft Sentinel, or any Sigma-compatible SIEM.
- MITRE Coverage Mapping — Every detection is tagged to specific ATT&CK techniques and sub-techniques. The platform generates coverage heatmaps showing which tactics are well-defended and where gaps remain.
- Detection Debt Scoring — The advanced correlations engine calculates debt scores for uncovered techniques, ranking them by severity and exploitation frequency so teams can prioritize rule development where it matters most.
- Threat-Linked Context — Each rule is tied to its originating threat, complete with CVE mappings, actor attribution, IOC indicators, exploitation timelines, and adversary simulation scripts for validation.
- Detection Library — A filterable, searchable library of all 3,553 rules with multi-select filtering by type, severity, confidence, MITRE tactic, data source, and author. Analysts can build targeted rule sets for specific threat campaigns in minutes.
- API and MCP Access — Detection rules are accessible via REST API and the Threadlinqs MCP server, enabling integration with CI/CD pipelines, SOAR platforms, and automated deployment workflows.
Frequently Asked Questions
What skills do detection engineers need?
Detection engineering requires a blend of offensive and defensive security expertise. Core technical skills include proficiency in at least one SIEM query language (SPL, KQL, or SQL), deep understanding of operating system internals (Windows event logs, Sysmon, Linux auditd, macOS unified logging), and familiarity with the MITRE ATT&CK framework for mapping techniques to detection logic. Detection engineers must also understand log source architecture, data normalization (CIM, OCSF, ECS), and the performance implications of their queries on SIEM infrastructure. The strongest detection engineers have backgrounds in threat hunting, incident response, or red teaming, which gives them intuition about adversary evasion techniques and the creativity to write rules that survive tooling changes.
What is the difference between SPL, KQL, and Sigma?
SPL (Search Processing Language) is Splunk's proprietary query language, built for searching and transforming massive volumes of machine data. It uses a pipe-delimited syntax with powerful commands like tstats for accelerated searches and eval for field calculations. KQL (Kusto Query Language) is Microsoft's query language that powers Microsoft Sentinel, Microsoft Defender, and Azure Data Explorer, using a clean pipe-forward syntax optimized for tabular data operations. Sigma is a vendor-neutral, open-source format written in YAML that can be compiled into SPL, KQL, and 30+ other backend formats using sigma-cli or pySigma. Sigma acts as a universal interchange format — write a detection once and deploy it to any SIEM. In practice, most mature detection engineering teams maintain a Sigma-first workflow and compile to platform-specific formats for deployment.
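What a Sigma compiler does can be shown with a toy translation of a simplified selection into a KQL filter. Real conversions go through pySigma or sigma-cli with full field mappings and modifier support; the `selection_to_kql` function here is only a sketch of the idea:

```python
# Toy illustration of Sigma compilation: turn a (heavily simplified)
# Sigma selection into a KQL query. Real converters (pySigma, sigma-cli)
# also handle field mapping, logsource routing, and many more modifiers.
def selection_to_kql(table: str, selection: dict) -> str:
    clauses = []
    for field, value in selection.items():
        col, _, modifier = field.partition("|")   # e.g. "FileName|endswith"
        if modifier == "endswith":
            clauses.append(f'{col} endswith "{value}"')
        elif isinstance(value, list):             # list of values -> KQL "in"
            vals = ", ".join(f'"{v}"' for v in value)
            clauses.append(f"{col} in ({vals})")
        else:                                     # plain equality
            clauses.append(f'{col} == "{value}"')
    return table + "".join(f"\n| where {c}" for c in clauses)

print(selection_to_kql("DeviceEvents", {
    "FileName|endswith": "lsass.exe",
    "ActionType": "OpenProcessApiCall",
}))
```

Running this prints a `DeviceEvents` query with one `| where` clause per selection entry, mirroring the hand-written KQL example earlier in this article.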
How do you measure detection coverage?
Detection coverage is primarily measured by mapping active detection rules to the MITRE ATT&CK matrix and calculating the percentage of techniques and sub-techniques with at least one production rule. Key metrics include: technique coverage ratio (covered techniques / total relevant techniques), detection density (average rules per covered technique — higher density provides defense-in-depth), tactic coverage distribution (ensuring no single tactic like initial access or lateral movement is a blind spot), mean time to detection for covered techniques, and false positive rate per rule. Threadlinqs automates this measurement with its MITRE coverage heatmap and detection debt scoring engine, which ranks uncovered techniques by how frequently real-world threat actors exploit them.
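The first two metrics can be computed directly from a rule inventory. The technique IDs and rule counts below are illustrative:

```python
# Sketch of the coverage metrics described above, over a toy inventory.
# Technique IDs and rule counts are illustrative, not real coverage data.
relevant_techniques = {"T1003.001", "T1059.001", "T1021.002", "T1566.001"}
rules_by_technique = {        # active production rules per technique
    "T1003.001": 3,
    "T1059.001": 1,
}

covered = {t for t in relevant_techniques if rules_by_technique.get(t, 0) > 0}
coverage_ratio = len(covered) / len(relevant_techniques)
detection_density = sum(rules_by_technique[t] for t in covered) / len(covered)

print(f"coverage ratio:    {coverage_ratio:.0%}")    # 50%
print(f"detection density: {detection_density:.1f}")  # 2.0
```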