Published: March 2026 | Last reviewed: March 22, 2026

What Is Detection Engineering?

A Complete Guide from Threadlinqs Intelligence
TL;DR

Detection engineering is the discipline of building, testing, tuning, and maintaining the rules and logic that identify security threats in your environment. It sits at the intersection of threat intelligence, data engineering, and security operations — transforming knowledge about how attackers operate into automated detections written in languages like SPL, KQL, and Sigma.

// contents
  1. Definition
  2. Detection Engineering vs Threat Hunting
  3. Rule Formats: SPL, KQL, and Sigma
  4. MITRE ATT&CK Alignment
  5. The Detection Lifecycle
  6. False Positive Tuning
  7. Detection-as-Code
  8. How Threadlinqs Automates Detection Engineering
  9. FAQ

Definition

Detection engineering is the practice of designing, building, testing, and maintaining the automated rules and logic that identify malicious activity in an organization's environment. It is an engineering discipline — not an ad-hoc process of writing SIEM queries when an incident occurs. Detection engineers treat detection rules as software: they are designed against requirements, tested against sample data, version-controlled in Git, reviewed by peers, deployed through CI/CD pipelines, and continuously tuned based on operational feedback.

The field has emerged as a distinct specialty within security operations over the past five years, driven by the realization that buying security tools does not automatically produce security outcomes. A SIEM with no custom detections is an expensive log aggregator. An EDR with only vendor-provided rules misses the threats specific to your environment. Detection engineering bridges the gap between the tools organizations purchase and the coverage those tools actually deliver.

A detection engineer typically combines three skill sets: threat intelligence (understanding how attackers operate, what techniques they use, and which are most relevant to your threat landscape), data engineering (understanding what data sources are available, how they are structured, what fields are populated, and where the gaps are), and security operations (understanding how analysts triage alerts, what context they need, and what makes a detection operationally useful versus a noise generator).

Detection Engineering vs Threat Hunting

Detection engineering and threat hunting are complementary but distinct practices. Understanding the difference prevents organizational confusion about roles and responsibilities.

| Dimension | Detection Engineering | Threat Hunting |
| --- | --- | --- |
| Nature | Automated, persistent, continuous | Manual, episodic, hypothesis-driven |
| Goal | Build rules that fire on known patterns | Find threats that evade existing rules |
| Output | Detection rules in SPL/KQL/Sigma | Findings, new IOCs, new hypotheses |
| Trigger | Data matches a predefined pattern | Analyst formulates and tests a hypothesis |
| Coverage | Known TTPs and indicators | Unknown threats and novel techniques |
| Scalability | High — runs on every event 24/7 | Low — bounded by analyst time |
| Feedback loop | Alert volume, FP rate, MTTD | Hypotheses validated, threats found |

The two disciplines feed each other. Threat hunters use intelligence to form hypotheses and search for undiscovered threats; when they find something new, the finding becomes the basis for a new automated detection. Detection engineers, in turn, identify coverage gaps that become hunting priorities. The best security teams treat this as a continuous cycle: hunt, detect, tune, repeat.

If you had to choose where to invest first, invest in detection engineering. Automated detections run 24/7 at machine speed across every event in your environment. A single well-written detection rule provides more consistent coverage than any individual human can, and it does not take sick days or switch shifts.

Rule Formats: SPL, KQL, and Sigma

Detection rules are written in the query language of the platform they run on. Three languages dominate the security detection landscape:

Splunk SPL

SPL (Search Processing Language) is the query language for Splunk Enterprise and Splunk Cloud. It uses a pipe-based syntax where data flows through a chain of commands — search to filter, where for conditions, stats for aggregation, eval for computed fields, and table for output formatting. SPL is the most widely deployed SIEM query language in enterprise security operations.

index=windows sourcetype="XmlWinEventLog:Microsoft-Windows-Sysmon/Operational" EventCode=1
| where match(ParentImage, "(?i)\\\\(cmd|powershell|wscript|cscript)\.exe$")
| where match(Image, "(?i)\\\\(whoami|net|nltest|dsquery|ipconfig|systeminfo)\.exe$")
| stats count dc(Image) as unique_commands values(Image) as commands by ComputerName User ParentImage
| where unique_commands >= 3
| sort -unique_commands

This SPL detection identifies reconnaissance activity — multiple discovery commands executed from a single shell session, a pattern common in post-exploitation.

Microsoft KQL

KQL (Kusto Query Language) powers Microsoft Sentinel, Defender for Endpoint, and the broader Microsoft security ecosystem. KQL is a read-only query language designed for large-scale log analytics, with a syntax that emphasizes tabular data operators like where, extend, summarize, project, and join.

DeviceProcessEvents
| where Timestamp > ago(24h)
| where InitiatingProcessFileName in~ ("cmd.exe", "powershell.exe", "wscript.exe")
| where FileName in~ ("whoami.exe", "net.exe", "nltest.exe", "dsquery.exe", "ipconfig.exe", "systeminfo.exe")
| summarize CommandCount = dcount(FileName), Commands = make_set(FileName) by DeviceName, AccountName, InitiatingProcessFileName
| where CommandCount >= 3
| sort by CommandCount desc

Sigma

Sigma is a platform-agnostic detection rule format written in YAML. It describes what to detect without being tied to a specific SIEM platform. Sigma rules are converted to SPL, KQL, Elastic Query DSL, and other formats using converters like pySigma. The SigmaHQ community maintains over 3,000 open-source rules covering the MITRE ATT&CK matrix.

title: Reconnaissance Command Discovery Activity
id: a4b2c1d0-3e5f-4a8b-9c7d-1e2f3a4b5c6d
status: stable
description: Detects multiple discovery commands from a single shell session
logsource:
    category: process_creation
    product: windows
detection:
    selection_parent:
        ParentImage|endswith:
            - '\cmd.exe'
            - '\powershell.exe'
    selection_recon:
        Image|endswith:
            - '\whoami.exe'
            - '\net.exe'
            - '\nltest.exe'
            - '\ipconfig.exe'
            - '\systeminfo.exe'
    condition: selection_parent and selection_recon
level: medium

The advantage of Sigma is portability. Write once, convert to any platform. For organizations running multiple SIEM tools or planning a platform migration, Sigma provides a vendor-neutral detection library that survives tool changes.
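To make the portability concrete, here is a deliberately naive sketch of the translation step. Production conversion should go through pySigma and a platform backend; this toy converter (the functions `endswith_to_kql` and `sigma_to_kql` are invented for illustration) handles only the `|endswith` modifier used in the rule above:

```python
# Naive illustration of Sigma -> KQL translation. A real pipeline would
# use pySigma with a backend; this sketch only renders "|endswith"
# selections, ANDed together, to show the shape of the conversion.

def endswith_to_kql(field: str, suffixes: list[str]) -> str:
    """Render one Sigma '|endswith' selection as an ORed KQL predicate."""
    clauses = [f'{field} endswith "{s}"' for s in suffixes]
    return "(" + " or ".join(clauses) + ")"

def sigma_to_kql(table: str, selections: dict[str, list[str]]) -> str:
    """AND together a set of '|endswith' selections into one KQL query."""
    predicates = [endswith_to_kql(f, v) for f, v in selections.items()]
    return f"{table}\n| where " + "\n| where ".join(predicates)

query = sigma_to_kql("DeviceProcessEvents", {
    "InitiatingProcessFolderPath": ["\\cmd.exe", "\\powershell.exe"],
    "FolderPath": ["\\whoami.exe", "\\net.exe", "\\nltest.exe"],
})
print(query)
```

A real converter also has to handle field-name mappings per platform, the full set of Sigma modifiers, and arbitrary boolean conditions — which is exactly why the community maintains pySigma rather than ad-hoc scripts like this one.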

MITRE ATT&CK Alignment

MITRE ATT&CK is the standard taxonomy for organizing detection coverage. Every detection rule should map to one or more ATT&CK techniques, creating a measurable relationship between your detection library and the adversary behaviors it covers.

The process works in both directions. Intelligence-driven detection starts with ATT&CK: identify the techniques most used by the threat actors targeting your sector, then build detections for each. Coverage-driven detection starts with your existing rules: map each to ATT&CK, visualize the gaps using ATT&CK Navigator, and prioritize new detections for uncovered techniques.
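The coverage-driven direction reduces to set arithmetic once every rule is tagged with its ATT&CK techniques. A minimal sketch (the technique IDs below are illustrative, not a real threat profile):

```python
# Gap analysis sketch: compare the techniques your adversaries use
# against the techniques your deployed rules cover. IDs are examples.
adversary_techniques = {"T1059.001", "T1021.002", "T1003.001", "T1547.001"}
covered_by_rules     = {"T1059.001", "T1547.001", "T1566.001"}

# Techniques that are relevant but have no detection: hunting/build priorities.
gaps = adversary_techniques - covered_by_rules

# Fraction of relevant techniques with at least one rule.
coverage = len(adversary_techniques & covered_by_rules) / len(adversary_techniques)

print(sorted(gaps))
print(f"{coverage:.0%}")  # 50%
```

The same two sets, exported as an ATT&CK Navigator layer, produce the heatmap described below.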

Coverage is not binary. For any given technique, you might have:

- No visibility: the telemetry needed to observe the technique is not collected
- Telemetry only: the relevant events are collected, but no rule evaluates them
- Partial detection: a rule catches some variants or procedures but not others
- Validated detection: a rule that has been tested against simulated execution of the technique

Most organizations find that their initial ATT&CK heatmap is sparse. That is expected and useful. The heatmap makes invisible gaps visible, which is the first step to closing them. Prioritize based on threat intelligence: cover the techniques your adversaries actually use before worrying about theoretical gaps.

The Detection Lifecycle

Building a detection rule is not a one-time event. Detections have a lifecycle that mirrors software development, and treating them as "write once, deploy forever" is how organizations end up with thousands of stale, noisy, or broken rules.

Phase 1: Requirements

Every detection starts with a requirement — a threat behavior that needs to be caught. Requirements come from threat intelligence (a new campaign targeting your sector), incident response (a technique observed during a real compromise), compliance mandates (regulations requiring detection of specific data access patterns), or coverage analysis (gaps in your ATT&CK heatmap). Clear requirements prevent the common antipattern of building detections for threats that are not relevant to your environment.

Phase 2: Data Assessment

Before writing a rule, validate that the required data exists. Can you detect this technique with the logs you currently collect? Are the necessary fields populated and normalized? What is the latency between event occurrence and SIEM ingestion? Many detection projects fail at this stage — the technique is real, but the data to detect it simply is not being collected. Data assessment prevents wasted engineering effort and identifies data onboarding priorities.
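A quick way to ground this assessment is to measure how often the fields a planned rule depends on are actually populated in a sample of ingested events. A sketch with hypothetical sample data:

```python
# Data assessment sketch: check field population rates before writing a
# rule that depends on those fields. Events and field names are invented.
from collections import Counter

sample_events = [
    {"Image": "C:\\Windows\\System32\\whoami.exe",
     "ParentImage": "C:\\Windows\\System32\\cmd.exe", "User": "alice"},
    {"Image": "C:\\Windows\\System32\\net.exe",
     "ParentImage": None, "User": "bob"},       # parent not logged
    {"Image": "C:\\Windows\\System32\\ipconfig.exe",
     "ParentImage": "C:\\Windows\\System32\\cmd.exe", "User": None},
]

required = ["Image", "ParentImage", "User"]
populated = Counter(f for e in sample_events for f in required if e.get(f))

for field in required:
    pct = populated[field] / len(sample_events)
    print(f"{field}: {pct:.0%} populated")
```

If a field the rule hinges on is populated in only a fraction of events, that is a data onboarding problem to fix before any detection logic is written.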

Phase 3: Rule Development

Write the detection logic in the appropriate language. Start broad, then refine. The first draft of a rule should be intentionally noisy — it is easier to add exclusions to a rule that catches everything than to identify what a too-narrow rule misses. Test against historical data to estimate alert volume and identify common false positive patterns.

Phase 4: Validation

Test the detection against realistic attack simulations. Does the rule fire when the technique is actually executed? Does it fire on all known variants? Atomic Red Team, Caldera, and commercial BAS (Breach and Attack Simulation) platforms provide automated technique execution for validation. If you cannot validate a detection, you cannot trust it.

Phase 5: Deployment and Tuning

Deploy to production and enter the tuning phase. Monitor alert volume, false positive rate, and analyst feedback. Tune iteratively: add allowlists for known-good processes, adjust thresholds, add contextual conditions. A detection rule is not finished when it is deployed — it is finished when analysts trust it enough to act on it without hesitation.

Phase 6: Maintenance

Review detections periodically. Data sources change, environments evolve, and attackers adapt. A detection that worked six months ago may no longer fire because a data source was decommissioned, a field name changed, or the attacker shifted to a variant the rule does not cover. Scheduled reviews prevent detection rot.

False Positive Tuning

False positives are the single largest operational cost in detection engineering. A rule that generates 500 alerts per day with a 95% false positive rate does not protect the organization — it trains analysts to ignore alerts. Tuning is not optional; it is a core engineering responsibility.

Baseline before deploying. Run the detection in a non-alerting mode for a week. Understand what normal looks like before you start alerting on anomalies. This prevents the common pattern of deploying a rule, getting flooded with alerts, and immediately disabling it.

Use allowlists, not blocklists. Instead of trying to enumerate every malicious variant (an impossible task), identify known-good processes and paths that trigger the rule legitimately and exclude them. Allowlists are smaller, more maintainable, and less likely to create blind spots.
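In code, an allowlist is a small exclusion check applied before alerting. A sketch with hypothetical known-good paths:

```python
# Allowlist exclusion sketch: drop events whose parent process is a
# known-good path instead of enumerating every bad variant. The paths
# below are hypothetical examples of legitimate agents.
ALLOWLIST = {
    "c:\\program files\\inventoryagent\\inventory.exe",
    "c:\\windows\\ccm\\ccmexec.exe",  # e.g. a software-management agent
}

def is_allowlisted(event: dict) -> bool:
    # Normalize case so path comparisons are robust.
    parent = (event.get("ParentImage") or "").lower()
    return parent in ALLOWLIST

events = [
    {"Image": "whoami.exe", "ParentImage": "C:\\Windows\\CCM\\CcmExec.exe"},
    {"Image": "whoami.exe", "ParentImage": "C:\\Users\\x\\evil.exe"},
]
suspicious = [e for e in events if not is_allowlisted(e)]
print(len(suspicious))  # the agent-spawned event is excluded
```

The allowlist stays small and reviewable, and removing an entry restores full coverage — unlike a blocklist, where a missing entry is an invisible blind spot.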

Add context. A single process execution is rarely sufficient for a high-confidence alert. Add contextual conditions: the parent process, the command-line arguments, the user account, the time of day, the asset criticality. Each additional condition reduces false positives while preserving true positive coverage.

Implement threshold alerting. Instead of alerting on every individual reconnaissance command, alert when a host executes three or more discovery commands within five minutes. Thresholds reduce noise from legitimate administrative activity while still catching systematic reconnaissance.
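The same threshold logic that the SPL and KQL examples express with `stats` and `summarize` can be sketched as a sliding window (the event data is invented for illustration):

```python
# Threshold-alerting sketch: alert only when a host runs three or more
# distinct discovery commands within a five-minute window.
from datetime import datetime, timedelta
from collections import defaultdict

WINDOW = timedelta(minutes=5)
THRESHOLD = 3

def threshold_alerts(events):
    """events: time-ordered (timestamp, host, command) tuples."""
    alerts = []
    seen = defaultdict(list)  # host -> [(ts, command), ...] within window
    for ts, host, cmd in events:
        # Keep only events still inside the window, then add this one.
        recent = [(t, c) for t, c in seen[host] if ts - t <= WINDOW]
        recent.append((ts, cmd))
        seen[host] = recent
        if len({c for _, c in recent}) >= THRESHOLD:
            alerts.append((host, ts))
    return alerts

t0 = datetime(2026, 3, 1, 9, 0)
events = [
    (t0, "HOST1", "whoami.exe"),
    (t0 + timedelta(minutes=1), "HOST1", "net.exe"),
    (t0 + timedelta(minutes=2), "HOST1", "nltest.exe"),  # 3rd distinct
    (t0 + timedelta(minutes=30), "HOST2", "whoami.exe"),  # isolated
]
print(threshold_alerts(events))
```

HOST1 triggers one alert on the third distinct command; HOST2's single command never crosses the threshold, which is exactly the noise reduction the paragraph describes.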

Close the feedback loop. Track which detections generate the most false positives and which analysts consistently close as benign. This data drives tuning priorities and identifies rules that need rework or retirement.

Detection-as-Code

Detection-as-code applies software engineering practices to detection rule management. Instead of creating and editing rules directly in a SIEM console, detections are managed as code in a Git repository with the same rigor applied to production software.

The core practices include:

- Version control: every rule lives in a Git repository with its full change history
- Peer review: changes are proposed as pull requests and reviewed before merge
- Automated testing: CI pipelines validate rule syntax and behavior against sample data
- Automated deployment: approved rules are pushed to production platforms through CD
- Rollback: a bad rule change is reverted like any other commit

Detection-as-code does not require sophisticated tooling to start. A Git repository, a consistent rule format, and a peer review process provide most of the value. Automation (CI testing, automated deployment) can be layered on as the program matures.
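One of the highest-value CI checks is a metadata lint: fail the pipeline if a rule is missing the fields downstream tooling depends on. A sketch, with the required-field set and function name chosen for illustration (a real pipeline would load rule files from the repository rather than inline dicts):

```python
# CI lint sketch for a detection-as-code repo: every rule must carry
# the metadata fields that deployment and reporting tooling rely on.
import uuid

REQUIRED_FIELDS = {"title", "id", "status", "level", "detection"}

def lint_rule(rule: dict) -> list[str]:
    """Return a list of problems; an empty list means the rule passes."""
    problems = [f"missing field: {f}"
                for f in sorted(REQUIRED_FIELDS - rule.keys())]
    try:
        uuid.UUID(rule.get("id", ""))  # rule IDs must be stable UUIDs
    except ValueError:
        problems.append("id is not a valid UUID")
    return problems

good = {"title": "Recon Commands", "id": str(uuid.uuid4()),
        "status": "stable", "level": "medium", "detection": {}}
bad = {"title": "No ID", "level": "high"}

print(lint_rule(good))  # []
print(lint_rule(bad))
```

Running this across every rule file on each pull request catches broken metadata before it reaches production — the detection-engineering equivalent of a failing unit test.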

How Threadlinqs Automates Detection Engineering

Threadlinqs Intelligence delivers production-ready detection rules for every tracked threat. Each threat in the platform includes SPL, KQL, and Sigma rules mapped to specific MITRE ATT&CK techniques, with annotations explaining the detection logic and expected false positive patterns.

The platform currently provides over 1,800 detection rules across 160+ threats, covering 465 MITRE ATT&CK techniques. Rules are maintained as threats evolve — when new variants are observed or new data sources become available, detections are updated and revalidated. A detection library with multi-select filtering lets engineers search by technique, severity, rule format, confidence level, and data source.


Frequently Asked Questions

What is the difference between detection engineering and threat hunting?

Detection engineering creates automated, persistent rules that monitor for known patterns 24/7. Threat hunting is a proactive, human-driven search for threats that evade existing detections. They form a feedback loop: hunts discover new threats, which become new automated detections, and detection gaps inform hunting priorities.

What languages do detection engineers use?

The three primary detection languages are Splunk SPL, Microsoft KQL, and Sigma (a platform-agnostic YAML format). Detection engineers also use YARA for file signatures, Snort/Suricata rules for network detection, and Python for automation and tooling around the detection pipeline.

What is detection-as-code?

Detection-as-code manages detection rules using software engineering practices: version control in Git, peer review via pull requests, automated testing in CI pipelines, and deployment via CD. It provides auditability, consistency, and the ability to roll back changes — the same benefits that software teams gain from infrastructure-as-code.

How do you reduce false positives in detection rules?

Baseline normal behavior before deploying, use allowlists for known-good processes, add contextual conditions (parent process, user, time), implement threshold alerting instead of single-event triggers, and maintain a feedback loop from analyst triage back to rule tuning. The goal is a signal-to-noise ratio that analysts trust enough to act on.

How does MITRE ATT&CK relate to detection engineering?

ATT&CK provides the organizing taxonomy for detection coverage. Every rule maps to ATT&CK techniques, enabling teams to visualize coverage gaps and prioritize new detections based on the techniques most relevant to their threat landscape. ATT&CK Navigator heatmaps make detection coverage measurable and communicable.

// author
Threadlinqs Intel Team
Security Engineer at Threadlinqs Intelligence. Researching active threats, building detection rules, and mapping adversary tradecraft across SPL, KQL, and Sigma.
medium.com/@hatim.bakkali10