
What is Detection Engineering?

Detection engineering is the practice of designing, building, testing, and maintaining rules and logic that identify malicious activity in security monitoring systems, transforming threat intelligence into automated, production-grade detections.

Detection Engineering Explained

Every organization generates enormous volumes of security telemetry — endpoint logs, network flows, authentication events, cloud audit trails. Detection engineering is the discipline that turns that raw telemetry into alerts that matter. A detection engineer writes the rules that fire when an adversary dumps credentials from LSASS, when a phishing payload executes a PowerShell download cradle, or when a privileged account authenticates from an impossible geographic location.

Unlike traditional signature-based antivirus, modern detection engineering focuses on behavioral patterns. Instead of matching a known malware hash, a well-crafted detection rule identifies the technique itself: parent-child process relationships, anomalous API call sequences, suspicious registry modifications, or lateral movement patterns. This approach survives adversary retooling because the underlying behavior — the TTP — is far harder to change than a file hash or C2 domain.

Detection engineering sits at the intersection of threat intelligence, software engineering, and security operations. It borrows practices from software development — version control, code review, unit testing, CI/CD pipelines — and applies them to detection logic, a methodology increasingly called Detection as Code.
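To make the Detection as Code idea concrete, here is a minimal sketch of a unit-tested detection, assuming a simplified event model: the rule_matches predicate and the sample events are hypothetical stand-ins for real SIEM logic and telemetry, but the workflow (rule as code, tests run in CI) is the point.

# Minimal Detection-as-Code sketch. The rule and events below are
# hypothetical stand-ins for real SIEM logic and telemetry; the point
# is that detection logic expressed as code can be unit-tested in CI
# like any other software.

def rule_matches(event: dict) -> bool:
    """Fire when a process opens lsass.exe with a suspicious access mask."""
    suspicious_masks = {"0x1010", "0x1410", "0x1438", "0x143a"}
    allowlist = {"csrss.exe", "svchost.exe", "MsMpEng.exe"}
    return (
        event.get("TargetImage", "").lower().endswith("\\lsass.exe")
        and event.get("GrantedAccess") in suspicious_masks
        and event.get("SourceImage", "").split("\\")[-1] not in allowlist
    )

def test_fires_on_credential_dump():
    assert rule_matches({
        "TargetImage": r"C:\Windows\System32\lsass.exe",
        "GrantedAccess": "0x1010",
        "SourceImage": r"C:\Users\victim\mimikatz.exe",
    })

def test_ignores_allowlisted_system_process():
    assert not rule_matches({
        "TargetImage": r"C:\Windows\System32\lsass.exe",
        "GrantedAccess": "0x1410",
        "SourceImage": r"C:\Windows\System32\csrss.exe",
    })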

The Detection Lifecycle

Building effective detections follows a structured lifecycle. Each stage ensures that rules are grounded in real adversary behavior and validated before they reach production.

01. Hypothesis
Identify a threat behavior to detect, typically sourced from threat intelligence, incident reports, red team findings, or MITRE ATT&CK gap analysis.

02. Rule Creation
Write detection logic in the target query language (SPL, KQL, Sigma). Define the data sources, conditions, thresholds, and exclusions needed.

03. Testing
Validate against known-good and known-bad datasets. Run atomic red team tests or adversary simulations to confirm the rule triggers on real attack behavior (see the sketch after this list).

04. Deployment
Push the validated rule to production SIEM/XDR. Set severity, alert routing, and response playbooks. Monitor initial alert volume for false positives.

05. Tuning
Continuously refine thresholds, add exclusions for legitimate activity, and update logic as adversary techniques evolve or data sources change.
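To ground the testing stage, here is a hedged sketch of a validation harness that replays labeled sample events through a candidate rule and counts hits. The parent-child rule (an Office application spawning PowerShell, one of the behavioral patterns mentioned earlier) and the events are illustrative assumptions, not real telemetry.

# Illustrative testing-stage harness: replay labeled events through a
# candidate rule and count true/false positives before deployment.
# The rule and the sample events are hypothetical.

OFFICE_PARENTS = {"winword.exe", "excel.exe", "outlook.exe"}

def candidate_rule(event: dict) -> bool:
    """Flag PowerShell spawned by an Office application."""
    return (
        event.get("Image", "").lower().endswith("powershell.exe")
        and event.get("ParentImage", "").split("\\")[-1].lower() in OFFICE_PARENTS
    )

known_bad = [  # events the rule MUST catch (e.g., from an atomic red team run)
    {"Image": r"C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe",
     "ParentImage": r"C:\Program Files\Microsoft Office\winword.exe"},
]
known_good = [  # benign baseline events the rule must NOT flag
    {"Image": r"C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe",
     "ParentImage": r"C:\Windows\explorer.exe"},
]

true_positives = sum(candidate_rule(e) for e in known_bad)
false_positives = sum(candidate_rule(e) for e in known_good)
print(f"TP: {true_positives}/{len(known_bad)}, FP: {false_positives}/{len(known_good)}")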

Detection Rule Formats

Security detections are written in query languages specific to the monitoring platform. Four formats dominate the industry.

SPL (Splunk Enterprise / Splunk Cloud)
Search Processing Language is Splunk's query language for searching, filtering, and transforming machine data. SPL excels at complex data transformations with commands like stats, eval, lookup, and tstats. It is one of the most widely deployed SIEM languages in enterprise SOCs.

KQL (Microsoft Sentinel / Defender / Azure Data Explorer)
Kusto Query Language powers Microsoft's security ecosystem. KQL uses a pipe-forward syntax with operators like where, project, summarize, and join. Its deep integration with Microsoft 365 Defender and Entra ID makes it essential for organizations in the Microsoft ecosystem.

Sigma (Universal / Vendor-Neutral YAML)
Sigma rules are written in YAML and can be converted into SPL, KQL, and 30+ other SIEM formats. Sigma serves as a lingua franca for detection sharing: community rules on SigmaHQ cover thousands of techniques, and the format is readily portable across platforms (see the conversion sketch after these descriptions).

YARA (File / Memory Scanning)
YARA rules identify malware by matching byte patterns, strings, and conditions within files or process memory. While not a SIEM query language, YARA is critical for malware classification, threat hunting on disk, and enriching file-based IOCs with behavioral context.
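As a brief illustration of Sigma's portability, the sketch below compiles a Sigma rule into an SPL query using the pySigma library. It assumes pySigma and its Splunk backend package (pysigma-backend-splunk) are installed; exact APIs can vary between versions.

# Hedged sketch: compile a Sigma rule to SPL with pySigma.
# Assumes the pySigma and pysigma-backend-splunk packages are installed.
from sigma.collection import SigmaCollection
from sigma.backends.splunk import SplunkBackend

sigma_yaml = """
title: Office Spawning PowerShell
logsource:
  category: process_creation
  product: windows
detection:
  selection:
    ParentImage|endswith: '\\winword.exe'
    Image|endswith: '\\powershell.exe'
  condition: selection
"""

rules = SigmaCollection.from_yaml(sigma_yaml)
for query in SplunkBackend().convert(rules):
    print(query)  # the equivalent SPL search string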

Example: Detecting LSASS Credential Dumping

The same adversary technique — accessing LSASS process memory to extract credentials (MITRE T1003.001) — expressed in three detection formats. All three flag processes that open lsass.exe with access masks that include memory-read rights (0x1010, for example, combines PROCESS_QUERY_LIMITED_INFORMATION with PROCESS_VM_READ) while excluding common legitimate system processes:

SPL
index=windows sourcetype=sysmon EventCode=10
  TargetImage="*\\lsass.exe"
  GrantedAccess IN ("0x1010", "0x1410", "0x1438", "0x143a")
  NOT SourceImage IN ("*\\csrss.exe", "*\\svchost.exe", "*\\MsMpEng.exe")
| stats count by SourceImage, TargetImage, GrantedAccess, Computer
KQL
DeviceEvents
| where ActionType == "OpenProcessApiCall"
| where FileName == "lsass.exe"
| where InitiatingProcessFileName !in ("csrss.exe", "svchost.exe", "MsMpEng.exe")
| where AdditionalFields has_any ("0x1010", "0x1410", "0x1438", "0x143a")
| project Timestamp, DeviceName, InitiatingProcessFileName, FileName
SIGMA
title: LSASS Memory Access - Credential Dumping
status: stable
logsource:
  category: process_access
  product: windows
detection:
  selection:
    TargetImage|endswith: '\lsass.exe'
    GrantedAccess:
      - '0x1010'
      - '0x1410'
      - '0x1438'
      - '0x143a'
  filter_legitimate:
    SourceImage|endswith:
      - '\csrss.exe'
      - '\svchost.exe'
      - '\MsMpEng.exe'
  condition: selection and not filter_legitimate
level: high
tags:
  - attack.credential_access
  - attack.t1003.001

The Detection Gap Problem

The MITRE ATT&CK framework catalogs over 600 techniques and sub-techniques across 14 tactics. Most organizations have detection coverage for fewer than 20% of them. This creates a vast detection gap — the space between what adversaries can do and what defenders can see.

The Detection Debt Equation

Detection debt accumulates when high-priority techniques lack any detection coverage. It is calculated as: debt_score = technique_severity * exploitation_frequency * (1 - coverage_ratio). A technique used by 15 tracked threat actors with zero detection rules represents critical debt. Like technical debt in software, detection debt compounds — each undetected technique is an open door for adversaries, and the longer it goes unaddressed, the higher the risk of a breach that could have been caught.
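A worked example of the equation, with illustrative inputs (the severity scale and frequency count are hypothetical, not actual Threadlinqs scoring weights):

# Worked example of the detection debt equation. The inputs are
# hypothetical; real scoring weights would come from threat intelligence.
def debt_score(technique_severity: float,
               exploitation_frequency: float,
               coverage_ratio: float) -> float:
    return technique_severity * exploitation_frequency * (1 - coverage_ratio)

# T1003.001 (LSASS dumping): high severity, used by 15 tracked actors,
# zero detection rules in production -> maximum debt for this technique.
print(debt_score(technique_severity=9.0,
                 exploitation_frequency=15,
                 coverage_ratio=0.0))   # 135.0

# The same technique with partial coverage accrues proportionally less debt.
print(debt_score(9.0, 15, coverage_ratio=0.66))  # ~45.9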

Several factors drive detection debt: limited detection engineering capacity, gaps in log source coverage, the steady stream of newly catalogued adversary techniques, and rule decay as data sources and tradecraft change.

How Threadlinqs Solves It

Threadlinqs Intelligence eliminates the blank-page problem for detection engineers by shipping pre-built, threat-mapped detection rules in every format your SOC needs.

3,553 Detection Rules
3 Formats per Threat
465 MITRE Techniques
344 Mapped Threats

Frequently Asked Questions

What skills do detection engineers need?

Detection engineering requires a blend of offensive and defensive security expertise. Core technical skills include proficiency in at least one SIEM query language (SPL, KQL, or SQL), deep understanding of operating system internals (Windows event logs, Sysmon, Linux auditd, macOS unified logging), and familiarity with the MITRE ATT&CK framework for mapping techniques to detection logic. Detection engineers must also understand log source architecture, data normalization (CIM, OCSF, ECS), and the performance implications of their queries on SIEM infrastructure. The strongest detection engineers have backgrounds in threat hunting, incident response, or red teaming, which gives them intuition about adversary evasion techniques and the creativity to write rules that survive tooling changes.

What is the difference between SPL, KQL, and Sigma?

SPL (Search Processing Language) is Splunk's proprietary query language, built for searching and transforming massive volumes of machine data. It uses a pipe-delimited syntax with powerful commands like tstats for accelerated searches and eval for field calculations. KQL (Kusto Query Language) is Microsoft's query language that powers Microsoft Sentinel, Microsoft Defender, and Azure Data Explorer, using a clean pipe-forward syntax optimized for tabular data operations. Sigma is a vendor-neutral, open-source format written in YAML that can be compiled into SPL, KQL, and 30+ other backend formats using sigma-cli or pySigma. Sigma acts as a universal interchange format — write a detection once and deploy it to any SIEM. In practice, most mature detection engineering teams maintain a Sigma-first workflow and compile to platform-specific formats for deployment.

How do you measure detection coverage?

Detection coverage is primarily measured by mapping active detection rules to the MITRE ATT&CK matrix and calculating the percentage of techniques and sub-techniques with at least one production rule. Key metrics include: technique coverage ratio (covered techniques / total relevant techniques), detection density (average rules per covered technique — higher density provides defense-in-depth), tactic coverage distribution (ensuring no single tactic like initial access or lateral movement is a blind spot), mean time to detection for covered techniques, and false positive rate per rule. Threadlinqs automates this measurement with its MITRE coverage heatmap and detection debt scoring engine, which ranks uncovered techniques by how frequently real-world threat actors exploit them.
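As a minimal sketch, the first two metrics can be computed from a mapping of production rules to ATT&CK techniques; the mapping below is illustrative, not real coverage data.

# Hedged sketch: compute technique coverage ratio and detection density
# from a hypothetical mapping of production rules to ATT&CK techniques.
rules_by_technique = {          # illustrative data, not real coverage
    "T1003.001": ["spl_lsass_access", "kql_lsass_access"],
    "T1059.001": ["sigma_ps_download_cradle"],
    "T1021.002": [],            # relevant technique with no rules: a gap
}

relevant = list(rules_by_technique)
covered = [t for t, rules in rules_by_technique.items() if rules]

coverage_ratio = len(covered) / len(relevant)
detection_density = sum(len(rules_by_technique[t]) for t in covered) / len(covered)

print(f"technique coverage ratio: {coverage_ratio:.0%}")              # 67%
print(f"detection density: {detection_density:.1f} rules/technique")  # 1.5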